| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-11 12:33:28 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (555 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-11 12:33:10 |
| card | string (length) | 11 | 1.01M |
YoaneBailiang/STL
YoaneBailiang
2023-02-17T13:29:08Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-17T13:29:08Z
--- license: creativeml-openrail-m ---
jiaoqsh/mbart-large-50-finetuned-stocks-event-2
jiaoqsh
2023-02-17T13:25:53Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "summarization", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-02-17T13:18:08Z
--- license: mit tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mbart-large-50-finetuned-stocks-event-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-finetuned-stocks-event-2 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1281 - Rouge1: 0.9005 - Rouge2: 0.8194 - Rougel: 0.9005 - Rougelsum: 0.9005 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 3.9121 | 1.0 | 20 | 1.1223 | 0.1389 | 0.1111 | 0.1366 | 0.1377 | | 0.2649 | 2.0 | 40 | 0.1712 | 0.8218 | 0.6944 | 0.8241 | 0.8194 | | 0.0404 | 3.0 | 60 | 0.1892 | 0.9329 | 0.8611 | 0.9329 | 0.9329 | | 0.0176 | 4.0 | 80 | 0.1553 | 0.9236 | 0.8472 | 0.9236 | 0.9213 | | 0.0151 | 5.0 | 100 | 0.1848 | 0.8426 | 0.7454 | 0.8417 | 0.8426 | | 0.0117 | 6.0 | 120 | 0.1917 | 0.8727 | 0.7778 | 0.8727 | 0.8727 | | 0.0246 | 7.0 | 140 | 0.1366 | 0.9074 | 0.8333 | 0.9074 | 0.9074 | | 0.0018 | 8.0 | 160 | 0.1281 | 0.9005 | 0.8194 | 0.9005 | 0.9005 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
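The card above stops at the framework versions without showing inference; a minimal sketch using the `transformers` summarization pipeline (the input headline below is invented for illustration) might look like this:

```python
from transformers import pipeline

# Minimal inference sketch; the headline is an invented example, not from the card
summarizer = pipeline("summarization", model="jiaoqsh/mbart-large-50-finetuned-stocks-event-2")
text = "Acme Corp announced a $2 billion share buyback after reporting record quarterly earnings."
print(summarizer(text, max_length=32, min_length=4)[0]["summary_text"])
```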
MunSu/xlm-roberta-base-finetuned-panx-en
MunSu
2023-02-17T13:18:27Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-15T10:53:33Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.en split: validation args: PAN-X.en metrics: - name: F1 type: f1 value: 0.7654741624077228 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [MunSu/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/MunSu/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4148 - F1: 0.7655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 148 | 0.3936 | 0.7289 | | No log | 2.0 | 296 | 0.3534 | 0.7607 | | No log | 3.0 | 444 | 0.4148 | 0.7655 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.8.0 - Datasets 2.9.0 - Tokenizers 0.13.2
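Since this card and the `-it`/`-fr`/`-de` siblings below share the same setup, one hedged usage sketch covers them all; the example sentence is invented:

```python
from transformers import pipeline

# NER sketch for the PAN-X checkpoints; swap in the -it/-fr/-de repos as needed
ner = pipeline(
    "token-classification",
    model="MunSu/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean works for Google in Mountain View, California."))
```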
MunSu/xlm-roberta-base-finetuned-panx-it
MunSu
2023-02-17T13:13:29Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-15T10:49:57Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.it split: validation args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8610763454317898 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [MunSu/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/MunSu/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2900 - F1: 0.8611 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 210 | 0.2713 | 0.8249 | | No log | 2.0 | 420 | 0.2468 | 0.8539 | | No log | 3.0 | 630 | 0.2900 | 0.8611 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.8.0 - Datasets 2.9.0 - Tokenizers 0.13.2
MunSu/xlm-roberta-base-finetuned-panx-fr
MunSu
2023-02-17T13:08:27Z
5
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-15T10:43:20Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: validation args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.851908396946565 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [MunSu/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/MunSu/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4418 - F1: 0.8519 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 573 | 0.3510 | 0.8469 | | No log | 2.0 | 1146 | 0.4346 | 0.8487 | | 0.1319 | 3.0 | 1719 | 0.4418 | 0.8519 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.8.0 - Datasets 2.9.0 - Tokenizers 0.13.2
NTQAI/wav2vec2-large-japanese
NTQAI
2023-02-17T13:07:47Z
479
7
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "ja", "dataset:common_voice", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:04Z
--- language: ja datasets: - common_voice metrics: - wer - cer tags: - audio - automatic-speech-recognition - speech model-index: - name: Wav2Vec2 Japanese by NTQAI results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ja type: common_voice args: ja metrics: - name: Test WER type: wer value: 81.3 - name: Test CER type: cer value: 21.9 --- # Wav2Vec2-Large-Japanese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut), [TEDxJP](https://github.com/laboroai/TEDxJP-10K) and some other data. This model was trained on public data only. If you want a model trained on more than 600 hours of data, with higher accuracy, please contact nha282@gmail.com. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import librosa from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "ja" MODEL_ID = "NTQAI/wav2vec2-large-japanese" SAMPLES = 3 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]") processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = batch["sentence"].upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) predicted_sentences = processor.batch_decode(predicted_ids) for i, predicted_sentence in enumerate(predicted_sentences): print("-" * 100) print("Reference:", test_dataset[i]["sentence"]) print("Prediction:", predicted_sentence) ``` | Reference | Prediction | | ------------- | ------------- | | 祖母は、おおむね機嫌よく、サイコロをころがしている。 | 祖母思い切れを最布ロぼがしている | | 財布をなくしたので、交番へ行きます。 | 財布をなく時間ので交番でへ行きます | | 飲み屋のおやじ、旅館の主人、医者をはじめ、交際のある人にきいてまわったら、みんな、私より収入が多いはずなのに、税金は安い。 | ロみ屋のおやし旅館の主人に医をはめ交載のあの人に聞いて回ったらみんな私より収入が多い発ずなのに請金は安い | ## Evaluation The model can be evaluated as follows on the Japanese test data of Common Voice.
```python import torch import re import warnings import librosa from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor LANG_ID = "ja" MODEL_ID = "NTQAI/wav2vec2-large-japanese" DEVICE = "cuda" CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞", "؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]", "{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。", "、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽", "『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"] test_dataset = load_dataset("common_voice", LANG_ID, split="test") wer = load_metric("wer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/wer.py cer = load_metric("cer.py") # https://github.com/jonatasgrosman/wav2vec2-sprint/blob/main/cer.py chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]" processor = Wav2Vec2Processor.from_pretrained(MODEL_ID) model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID) model.to(DEVICE) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): with warnings.catch_warnings(): warnings.simplefilter("ignore") speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000) batch["speech"] = speech_array batch["sentence"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).upper() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Run batched inference over the test set def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to(DEVICE), attention_mask=inputs.attention_mask.to(DEVICE)).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) predictions = [x.upper() for x in result["pred_strings"]] references = [x.upper() for x in result["sentence"]] print(f"WER: {wer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") print(f"CER: {cer.compute(predictions=predictions, references=references, chunk_size=1000) * 100}") ``` **Test Result**: | Model | WER | CER | | ------------- | ------------- | ------------- | | NTQAI/wav2vec2-large-japanese | **73.10%** | **18.15%** | | vumichien/wav2vec2-large-xlsr-japanese | 1108.86% | 23.40% | | qqhann/w2v_hf_jsut_xlsr53 | 1012.18% | 70.77% |
Tritkoman/EnglishtoChurchSlavonicV1
Tritkoman
2023-02-17T12:59:26Z
3
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain", "translation", "en", "hi", "dataset:Tritkoman/autotrain-data-agahata", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2023-02-17T11:39:44Z
--- tags: - autotrain - translation language: - en - hi datasets: - Tritkoman/autotrain-data-agahata co2_eq_emissions: emissions: 0.6456799104854907 --- # Model Trained Using AutoTrain - Problem type: Translation - Model ID: 3547595703 - CO2 Emissions (in grams): 0.6457 ## Validation Metrics - Loss: 1.659 - SacreBLEU: 2.851 - Gen len: 18.977
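The AutoTrain card reports validation metrics but no usage; a sketch via the text2text-generation pipeline follows, with the caveat (an assumption) that some AutoTrain seq2seq models expect a task prefix:

```python
from transformers import pipeline

# Hedged sketch: assumes the model accepts raw English source text;
# check the repo's AutoTrain config in case a task prefix is required
translator = pipeline("text2text-generation", model="Tritkoman/EnglishtoChurchSlavonicV1")
print(translator("In the beginning was the Word.", max_length=64)[0]["generated_text"])
```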
123aadsaax/111
123aadsaax
2023-02-17T12:56:53Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-17T12:55:53Z
--- license: creativeml-openrail-m ---
NoNameFound/ppo-LunarLander-v2
NoNameFound
2023-02-17T12:52:05Z
4
0
transformers
[ "transformers", "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "endpoints_compatible", "region:us" ]
reinforcement-learning
2022-12-18T07:49:02Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 91.33 +/- 118.72 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 5000000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'NoNameFound/ppo-LunarLander-v2' 'batch_size': 512 'minibatch_size': 128} ```
FabioDataGeek/ppo-SnowballTarget
FabioDataGeek
2023-02-17T12:49:35Z
7
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-02-17T12:49:30Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: FabioDataGeek/ppo-SnowballTarget 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
samitizerxu/final-algae-swin-wirs
samitizerxu
2023-02-17T12:45:45Z
57
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T23:00:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: final-algae-swin-wirs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # final-algae-swin-wirs This model is a fine-tuned version of [samitizerxu/final-algae-swin-wirs](https://huggingface.co/samitizerxu/final-algae-swin-wirs) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7645 - Accuracy: 0.6725 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.3024 | 1.0 | 120 | 0.7645 | 0.6725 | | 1.2375 | 2.0 | 240 | 0.8013 | 0.6673 | | 1.1875 | 3.0 | 360 | 0.8251 | 0.6649 | | 1.187 | 4.0 | 480 | 0.8960 | 0.6403 | | 1.1472 | 5.0 | 600 | 0.9558 | 0.6244 | | 1.1374 | 6.0 | 720 | 1.1951 | 0.4877 | | 1.1114 | 7.0 | 840 | 1.1109 | 0.5358 | | 1.1201 | 8.0 | 960 | 0.9724 | 0.6227 | | 1.0801 | 9.0 | 1080 | 0.9913 | 0.5863 | | 1.0995 | 10.0 | 1200 | 1.0117 | 0.5933 | | 1.0817 | 11.0 | 1320 | 1.0239 | 0.5951 | | 1.0679 | 12.0 | 1440 | 1.0381 | 0.5839 | | 1.094 | 13.0 | 1560 | 1.0480 | 0.5910 | | 1.0325 | 14.0 | 1680 | 1.0671 | 0.5839 | | 1.0087 | 15.0 | 1800 | 1.0133 | 0.5892 | | 1.0525 | 16.0 | 1920 | 1.0332 | 0.5775 | | 1.0614 | 17.0 | 2040 | 1.0085 | 0.5939 | | 1.0065 | 18.0 | 2160 | 1.0070 | 0.5974 | | 1.0474 | 19.0 | 2280 | 1.0023 | 0.5898 | | 1.0346 | 20.0 | 2400 | 1.0072 | 0.5839 | | 1.0226 | 21.0 | 2520 | 1.0219 | 0.5792 | | 1.0474 | 22.0 | 2640 | 1.0106 | 0.5880 | | 0.983 | 23.0 | 2760 | 1.0020 | 0.5874 | | 0.9997 | 24.0 | 2880 | 1.0838 | 0.5593 | | 1.0074 | 25.0 | 3000 | 1.0781 | 0.5593 | | 1.0 | 26.0 | 3120 | 1.0378 | 0.5751 | | 1.0279 | 27.0 | 3240 | 1.0737 | 0.5604 | | 0.9696 | 28.0 | 3360 | 1.1385 | 0.5123 | | 0.9862 | 29.0 | 3480 | 1.1236 | 0.5282 | | 1.0155 | 30.0 | 3600 | 1.0415 | 0.5798 | | 0.9723 | 31.0 | 3720 | 1.1447 | 0.5258 | | 0.9935 | 32.0 | 3840 | 1.1166 | 0.5323 | | 0.9965 | 33.0 | 3960 | 1.0502 | 0.5716 | | 0.9645 | 34.0 | 4080 | 1.1316 | 0.5329 | | 0.9771 | 35.0 | 4200 | 1.1860 | 0.5170 | | 0.9976 | 36.0 | 4320 | 1.2937 | 0.4906 | | 0.9207 | 37.0 | 4440 | 1.2272 | 0.5135 | | 0.9813 | 38.0 | 4560 | 1.2067 | 0.5258 | | 0.9337 | 39.0 | 4680 | 1.2162 | 0.5282 | | 0.9628 | 40.0 | 4800 | 1.2700 | 0.5059 | | 0.9561 | 41.0 | 4920 | 1.2428 | 0.5094 | | 0.9208 | 42.0 | 5040 | 1.2271 | 0.5158 | | 0.9097 | 43.0 | 5160 | 1.2388 | 0.5182 | | 0.9487 | 44.0 | 5280 | 1.1966 | 0.5264 | | 0.9386 | 45.0 | 5400 | 1.2107 | 0.5258 | | 0.9291 | 46.0 | 5520 | 1.2893 | 0.4977 | | 0.9357 | 47.0 | 5640 | 1.2764 | 0.5041 | | 0.9064 | 48.0 | 5760 | 1.2710 | 0.5012 | | 0.9032 | 49.0 | 5880 | 1.2695 | 0.5 | | 0.9423 | 50.0 | 6000 | 1.2703 | 0.4982 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
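The card documents training only; a minimal classification sketch (the image path below is a placeholder) could be:

```python
from transformers import pipeline

# "algae_sample.jpg" is a placeholder; any local image path, URL, or PIL image works
classifier = pipeline("image-classification", model="samitizerxu/final-algae-swin-wirs")
print(classifier("algae_sample.jpg"))
```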
katkha/whisper-small-ka
katkha
2023-02-17T12:22:05Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "ka", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-13T11:25:36Z
--- language: - ka license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper Small ka - Davit Barbakadze results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small ka - Davit Barbakadze This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.1652 - eval_wer: 47.0800 - eval_runtime: 1493.5786 - eval_samples_per_second: 1.673 - eval_steps_per_second: 0.21 - epoch: 13.01 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
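No usage snippet is given above; a minimal transcription sketch (the audio file name is a placeholder, and decoding audio through the pipeline requires ffmpeg) might be:

```python
from transformers import pipeline

# "speech_ka.wav" is a placeholder for any Georgian audio file
asr = pipeline("automatic-speech-recognition", model="katkha/whisper-small-ka")
print(asr("speech_ka.wav")["text"])
```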
MunSu/xlm-roberta-base-finetuned-panx-de
MunSu
2023-02-17T12:18:59Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-14T23:58:31Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.fr split: validation args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.8525033829499323 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [MunSu/xlm-roberta-base-finetuned-panx-de](https://huggingface.co/MunSu/xlm-roberta-base-finetuned-panx-de) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4005 - F1: 0.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 500 | 0.3080 | 0.8254 | | No log | 2.0 | 1000 | 0.3795 | 0.8448 | | No log | 3.0 | 1500 | 0.4005 | 0.8525 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.8.0 - Datasets 2.9.0 - Tokenizers 0.13.2
ZhihongDeng/Reinforce-CartPole-v1
ZhihongDeng
2023-02-17T12:07:05Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T12:06:55Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
Tincando/my_awesome_neo_gpt-model
Tincando
2023-02-17T11:33:12Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt_neo", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-16T19:00:41Z
--- license: mit tags: - generated_from_trainer model-index: - name: my_awesome_neo_gpt-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_neo_gpt-model This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.4912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.3957 | 1.0 | 1109 | 3.4908 | | 3.2523 | 2.0 | 2218 | 3.4871 | | 3.1771 | 3.0 | 3327 | 3.4912 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
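A short generation sketch for the card above; the prompt and sampling settings are illustrative, not taken from the card:

```python
from transformers import pipeline

# Generation sketch; prompt and sampling settings are invented for illustration
generator = pipeline("text-generation", model="Tincando/my_awesome_neo_gpt-model")
print(generator("Once upon a time", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```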
joyc360/deepfakes
joyc360
2023-02-17T11:30:34Z
58
3
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-17T11:06:08Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: deepfakes results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.8157894611358643 --- # deepfakes Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### deepfake image ![deepfake image](images/deepfake_image.jpg) #### real image ![real image](images/real_image.jpg)
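The HuggingPics card shows example images but no code; a hedged sketch with the explicit ViT classes (the input file name is hypothetical) could be:

```python
from PIL import Image
from transformers import ViTForImageClassification, ViTImageProcessor

image = Image.open("suspect_photo.jpg")  # hypothetical input file
processor = ViTImageProcessor.from_pretrained("joyc360/deepfakes")
model = ViTForImageClassification.from_pretrained("joyc360/deepfakes")
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
# Expected labels, judging from the card's example images: "deepfake image" / "real image"
print(model.config.id2label[logits.argmax(-1).item()])
```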
egyee/dqn-SpaceInvadersNoFrameskip-v4-TEST_2
egyee
2023-02-17T11:26:35Z
2
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T11:25:46Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 872.50 +/- 297.66 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eryzml -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eryzml -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga eryzml ``` ## Hyperparameters ```python OrderedDict([('batch_size', 64), ('buffer_size', 150000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', True), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
mohammadbehdad/q-Taxi-v3
mohammadbehdad
2023-02-17T11:23:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T11:23:29Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.63 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="mohammadbehdad/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
mohammadbehdad/q-FrozenLake-v1-4x4-noSlippery
mohammadbehdad
2023-02-17T11:16:19Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T11:16:15Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="mohammadbehdad/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
alexrink/pegasus-xsum-finetuned-xsum
alexrink
2023-02-17T11:00:17Z
3
0
transformers
[ "transformers", "tf", "tensorboard", "pegasus", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-02-15T18:35:35Z
--- tags: - generated_from_keras_callback model-index: - name: alexrink/pegasus-xsum-finetuned-xsum results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # alexrink/pegasus-xsum-finetuned-xsum This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.6217 - Validation Loss: 3.8943 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.6217 | 3.8943 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
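The repo ships TensorFlow weights (note the `tf` tag), so a hedged TF inference sketch fits here; the article text is invented:

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# TF inference sketch; the input article is an invented example
tokenizer = AutoTokenizer.from_pretrained("alexrink/pegasus-xsum-finetuned-xsum")
model = TFAutoModelForSeq2SeqLM.from_pretrained("alexrink/pegasus-xsum-finetuned-xsum")
inputs = tokenizer(
    "The council voted on Tuesday to approve the new transit plan after months of debate.",
    return_tensors="tf", truncation=True,
)
summary_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```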
rizalmilyardi/IndobertNewsTest
rizalmilyardi
2023-02-17T11:00:15Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T10:40:47Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: IndobertNewsTest results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # IndobertNewsTest This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6846 - Accuracy: 0.8183 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 208 | 1.1026 | 0.7329 | | No log | 2.0 | 416 | 0.7514 | 0.8099 | | 1.2532 | 3.0 | 624 | 0.6846 | 0.8183 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
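A minimal classification sketch for the card above; the Indonesian headline is invented, and the label names come from whatever the repo's config defines:

```python
from transformers import pipeline

# The headline is an invented example; labels depend on the repo's id2label config
classifier = pipeline("text-classification", model="rizalmilyardi/IndobertNewsTest")
print(classifier("Pemerintah mengumumkan kebijakan ekonomi baru hari ini"))
```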
NoNameFound/ppo-LunarLander-v1
NoNameFound
2023-02-17T10:58:30Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T05:11:36Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -9.39 +/- 78.05 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 500000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'NoNameFound/ppo-LunarLander-v1' 'batch_size': 512 'minibatch_size': 128} ```
ybelkada/gpt-neo-125m-detox
ybelkada
2023-02-17T10:57:56Z
14
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-02-13T19:41:35Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value function or human feedback. The model can be used for text generation. ## Training logs The training logs can be found [here](https://wandb.ai/distill-bloom/trl/runs/ogn1tdv3?workspace=user-younesbelkada). ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="ybelkada/gpt-neo-125m-detox") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("ybelkada/gpt-neo-125m-detox") model = AutoModelForCausalLMWithValueHead.from_pretrained("ybelkada/gpt-neo-125m-detox") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
odahl/a2c-PandaReachDense-v2
odahl
2023-02-17T10:53:55Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-15T19:45:33Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -1.13 +/- 0.07 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
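Since the usage section above is still a TODO, here is a hedged loading sketch; the checkpoint filename follows the usual SB3-Hub naming convention but is an assumption, so check the repo's file list:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# "a2c-PandaReachDense-v2.zip" is an assumed filename following the usual convention
checkpoint = load_from_hub(repo_id="odahl/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)  # rolling it out also needs panda_gym's PandaReachDense-v2 env
```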
arjunvinod/bert-finetuned-squad
arjunvinod
2023-02-17T10:30:08Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T06:53:18Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
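The card has no usage section beyond the training recipe; a minimal QA sketch (question and context invented) might be:

```python
from transformers import pipeline

# Question and context are invented for illustration
qa = pipeline("question-answering", model="arjunvinod/bert-finetuned-squad")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.")
print(result["answer"])
```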
Ahmade/bert_fine_tuned_cola
Ahmade
2023-02-17T10:27:04Z
5
0
transformers
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T09:06:37Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Ahmade/bert_fine_tuned_cola results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Ahmade/bert_fine_tuned_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5945 - Validation Loss: 0.5177 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.5945 | 0.5177 | 0 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.9.0 - Tokenizers 0.13.2
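A hedged TF inference sketch for the card above; the labels may be generic `LABEL_0`/`LABEL_1` unless the repo's config names them:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Sketch for the TF checkpoint; the input sentence is an invented example
tokenizer = AutoTokenizer.from_pretrained("Ahmade/bert_fine_tuned_cola")
model = TFAutoModelForSequenceClassification.from_pretrained("Ahmade/bert_fine_tuned_cola")
inputs = tokenizer("This sentence reads perfectly fine.", return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(tf.argmax(logits, axis=-1)[0])])
```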
Botnoi/wav2vec2-xls-r-300m-th-cv11_0
Botnoi
2023-02-17T10:12:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "th", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-01T04:46:49Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer - cer model-index: - name: wav2vec2-xls-r-300m-th-cv11_0 results: [] datasets: - mozilla-foundation/common_voice_11_0 language: - th pipeline_tag: automatic-speech-recognition --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-th-cv11_0 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3391 - Wer: 0.2915 - Cer: 0.0651 - Clean Cer: 0.0508 - Learning Rate: 0.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Clean Cer | Rate | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:---------:|:------:| | 7.5397 | 0.37 | 500 | 3.5716 | 1.0 | 0.9811 | 0.9774 | 0.0001 | | 1.7478 | 0.75 | 1000 | 0.7702 | 0.8097 | 0.2296 | 0.1746 | 0.0001 | | 0.7687 | 1.12 | 1500 | 0.4997 | 0.5392 | 0.1415 | 0.1182 | 0.0001 | | 0.6064 | 1.5 | 2000 | 0.4270 | 0.4956 | 0.1238 | 0.1001 | 0.0001 | | 0.5473 | 1.87 | 2500 | 0.3809 | 0.4489 | 0.1105 | 0.0898 | 0.0001 | | 0.454 | 2.24 | 3000 | 0.3585 | 0.4256 | 0.1021 | 0.0813 | 0.0001 | | 0.4219 | 2.62 | 3500 | 0.3375 | 0.4063 | 0.0974 | 0.0777 | 0.0001 | | 0.4075 | 2.99 | 4000 | 0.3274 | 0.4036 | 0.0948 | 0.0746 | 0.0001 | | 0.3355 | 3.37 | 4500 | 0.3257 | 0.3782 | 0.0898 | 0.0729 | 0.0001 | | 0.3203 | 3.74 | 5000 | 0.3024 | 0.3561 | 0.0830 | 0.0659 | 0.0001 | | 0.3151 | 4.11 | 5500 | 0.3038 | 0.3606 | 0.0830 | 0.0653 | 0.0001 | | 0.2713 | 4.49 | 6000 | 0.3052 | 0.3595 | 0.0832 | 0.0655 | 0.0001 | | 0.2685 | 4.86 | 6500 | 0.2933 | 0.3436 | 0.0796 | 0.0628 | 0.0001 | | 0.2379 | 5.24 | 7000 | 0.3020 | 0.3362 | 0.0763 | 0.0608 | 0.0000 | | 0.224 | 5.61 | 7500 | 0.2874 | 0.3265 | 0.0745 | 0.0589 | 0.0000 | | 0.2204 | 5.98 | 8000 | 0.2922 | 0.3191 | 0.0724 | 0.0576 | 0.0000 | | 0.1927 | 6.36 | 8500 | 0.3107 | 0.3163 | 0.0719 | 0.0568 | 0.0000 | | 0.1875 | 6.73 | 9000 | 0.3034 | 0.3084 | 0.0703 | 0.0554 | 0.0000 | | 0.1786 | 7.11 | 9500 | 0.3210 | 0.3107 | 0.0702 | 0.0553 | 0.0000 | | 0.1606 | 7.48 | 10000 | 0.3231 | 0.3062 | 0.0688 | 0.0541 | 0.0000 | | 0.1594 | 7.85 | 10500 | 0.3234 | 0.3033 | 0.0680 | 0.0535 | 0.0000 | | 0.1498 | 8.23 | 11000 | 0.3276 | 0.3035 | 0.0680 | 0.0530 | 0.0000 | | 0.1396 | 8.6 | 11500 | 0.3265 | 0.2975 | 0.0668 | 0.0520 | 0.0000 | | 0.142 | 8.98 | 12000 | 0.3236 | 0.2930 | 0.0659 | 0.0515 | 0.0000 | | 0.1242 | 9.35 | 12500 | 0.3403 | 0.2921 | 0.0655 | 0.0511 | 0.0000 | | 0.1225 | 9.72 | 13000 | 0.3391 | 0.2915 | 0.0651 | 0.0508 | 0.0000 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
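No inference example appears above; a short transcription sketch follows (placeholder file name; audio should be, or will be resampled to, 16 kHz):

```python
from transformers import pipeline

# "thai_sample.wav" is a placeholder for any Thai audio file
asr = pipeline("automatic-speech-recognition", model="Botnoi/wav2vec2-xls-r-300m-th-cv11_0")
print(asr("thai_sample.wav")["text"])
```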
google/efficientnet-b7
google
2023-02-17T10:08:23Z
3,005
11
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:35:01Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # EfficientNet (b7 model) EfficientNet model trained on ImageNet-1k at resolution 600x600. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras). Disclaimer: The team releasing EfficientNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image into one of the 1,000 ImageNet classes: ```python import torch from datasets import load_dataset from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b7") model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7") inputs = preprocessor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet). ### BibTeX entry and citation info ```bibtex @article{Tan2019EfficientNetRM, title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, author={Mingxing Tan and Quoc V. Le}, journal={ArXiv}, year={2019}, volume={abs/1905.11946} } ```
google/efficientnet-b6
google
2023-02-17T10:08:06Z
159
0
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:28:54Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # EfficientNet (b6 model) EfficientNet model trained on ImageNet-1k at resolution 528x528. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras). Disclaimer: The team releasing EfficientNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image into one of the 1,000 ImageNet classes: ```python import torch from datasets import load_dataset from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b6") model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b6") inputs = preprocessor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet). ### BibTeX entry and citation info ```bibtex @article{Tan2019EfficientNetRM, title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, author={Mingxing Tan and Quoc V. Le}, journal={ArXiv}, year={2019}, volume={abs/1905.11946} } ```
google/efficientnet-b5
google
2023-02-17T10:07:49Z
179
1
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:24:47Z
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # EfficientNet (b5 model) EfficientNet model trained on ImageNet-1k at resolution 456x456. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras). Disclaimer: The team releasing EfficientNet did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png) ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image into one of the 1,000 ImageNet classes: ```python import torch from datasets import load_dataset from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification dataset = load_dataset("huggingface/cats-image") image = dataset["test"]["image"][0] preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b5") model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b5") inputs = preprocessor(image, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits # model predicts one of the 1000 ImageNet classes predicted_label = logits.argmax(-1).item() print(model.config.id2label[predicted_label]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet). ### BibTeX entry and citation info ```bibtex @article{Tan2019EfficientNetRM, title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks}, author={Mingxing Tan and Quoc V. Le}, journal={ArXiv}, year={2019}, volume={abs/1905.11946} } ```
google/efficientnet-b3
google
2023-02-17T10:06:26Z
200
0
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T23:18:33Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b3 model)

EfficientNet model trained on ImageNet-1k at resolution 300x300. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method which uniformly scales all dimensions of depth, width, and resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b3")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b3")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b2
google
2023-02-17T10:06:07Z
255,301
0
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T22:32:36Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b2 model)

EfficientNet model trained on ImageNet-1k at resolution 260x260. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method which uniformly scales all dimensions of depth, width, and resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b2")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b2")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b1
google
2023-02-17T10:05:45Z
3,499
1
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T22:30:43Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b1 model)

EfficientNet model trained on ImageNet-1k at resolution 240x240. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method which uniformly scales all dimensions of depth, width, and resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b1")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b1")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
google/efficientnet-b0
google
2023-02-17T10:05:19Z
16,072
8
transformers
[ "transformers", "pytorch", "efficientnet", "image-classification", "vision", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-15T20:17:27Z
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# EfficientNet (b0 model)

EfficientNet model trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946) by Mingxing Tan and Quoc V. Le, and first released in [this repository](https://github.com/keras-team/keras).

Disclaimer: The team releasing EfficientNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

EfficientNet is a mobile-friendly pure convolutional model (ConvNet) that proposes a new scaling method which uniformly scales all dimensions of depth, width, and resolution using a simple yet highly effective compound coefficient.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/efficientnet_architecture.png)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=efficientnet) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
import torch
from datasets import load_dataset
from transformers import EfficientNetImageProcessor, EfficientNetForImageClassification

dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

preprocessor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b0")
model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b0")

inputs = preprocessor(image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# model predicts one of the 1000 ImageNet classes
predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/efficientnet).

### BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```
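A quicker alternative, not shown in the card above: the high-level `pipeline` API wraps the processor and model in a single call. A minimal sketch (the image URL is an arbitrary example, not taken from the card):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="google/efficientnet-b0")

# Accepts a local path, a PIL image, or a URL
predictions = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
print(predictions[:3])  # top predicted labels with scores
```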
vijayprakash/arcade-game
vijayprakash
2023-02-17T10:01:05Z
18
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T19:13:30Z
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: arcade-game
  results:
  - task:
      name: Image Classification
      type: image-classification
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9756097793579102
---

# arcade-game

Autogenerated by HuggingPics🤗🖼️

Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).

Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).

## Example Images

#### playing card game
![playing card game](images/_playing_card_game.jpg)

#### 8 Ball Pool game
![8 Ball Pool game](images/8_Ball_Pool_game.jpg)

#### Asphalt game
![Asphalt game](images/Asphalt_game.jpg)

#### Bubble Shooter game
![Bubble Shooter game](images/Bubble_Shooter_game.jpg)

#### Call of Duty game
![Call of Duty game](images/Call_of_Duty_game.jpg)

#### Candy Crush Saga
![Candy Crush Saga](images/Candy_Crush_Saga.jpg)

#### Carrom Pool: Disc Game
![Carrom Pool: Disc Game](images/Carrom_Pool:_Disc_Game.jpg)

#### Clash of Clans game
![Clash of Clans game](images/Clash_of_Clans_game.jpg)

#### Coin Master game
![Coin Master game](images/Coin_Master_game.jpg)

#### Cricket League game
![Cricket League game](images/Cricket_League_game.jpg)
iubeda/Reinforce-CartPole-v1
iubeda
2023-02-17T09:57:19Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T09:57:09Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
    metrics:
    - type: mean_reward
      value: 500.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
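The card links to the course rather than showing the update rule, so here is a hedged sketch of the core REINFORCE loss (returns-to-go weighting the action log-probabilities); the variable names are illustrative, not taken from the trained model:

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of log pi(a_t | s_t) tensors; rewards: list of floats."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted returns-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    # Negative sign: optimizers minimize, REINFORCE ascends the expected return
    return -(torch.stack(log_probs) * returns).sum()
```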
pneubauer/basic-scratch-ppo-LunarLander-v2
pneubauer
2023-02-17T09:53:15Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T22:03:29Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 248.45 +/- 75.59
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo'
 'seed': 1
 'torch_deterministic': True
 'cuda': True
 'track': False
 'wandb_project_name': 'cleanRL'
 'wandb_entity': None
 'capture_video': False
 'env_id': 'LunarLander-v2'
 'total_timesteps': 5000000
 'learning_rate': 0.0003
 'num_envs': 4
 'num_steps': 2048
 'anneal_lr': True
 'gae': True
 'gamma': 0.99
 'gae_lambda': 0.95
 'num_minibatches': 32
 'update_epochs': 8
 'norm_adv': True
 'clip_coef': 0.2
 'clip_vloss': True
 'ent_coef': 0.01
 'vf_coef': 0.5
 'max_grad_norm': 0.5
 'target_kl': None
 'repo_id': 'pneubauer/basic-scratch-ppo-LunarLander-v2'
 'batch_size': 8192
 'minibatch_size': 256}
```
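The derived values at the bottom of the hyperparameter dict follow directly from the rollout settings; a short sketch of the arithmetic (a pure reproduction of the numbers above):

```python
num_envs, num_steps, num_minibatches = 4, 2048, 32

batch_size = num_envs * num_steps               # 4 * 2048 = 8192 transitions per update
minibatch_size = batch_size // num_minibatches  # 8192 / 32 = 256

print(batch_size, minibatch_size)  # 8192 256
```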
samitizerxu/large-algae-vit-wirs
samitizerxu
2023-02-17T09:40:49Z
40
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T23:56:58Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: large-algae-vit-wirs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # large-algae-vit-wirs This model is a fine-tuned version of [samitizerxu/large-algae-vit-wirs](https://huggingface.co/samitizerxu/large-algae-vit-wirs) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9128 - Accuracy: 0.6209 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1662 | 1.0 | 120 | 0.9128 | 0.6209 | | 1.0885 | 2.0 | 240 | 0.9469 | 0.6138 | | 1.1315 | 3.0 | 360 | 1.0919 | 0.5757 | | 1.0542 | 4.0 | 480 | 1.2291 | 0.5599 | | 1.028 | 5.0 | 600 | 1.1931 | 0.5599 | | 1.0023 | 6.0 | 720 | 1.1548 | 0.5675 | | 1.0176 | 7.0 | 840 | 1.0932 | 0.5757 | | 0.992 | 8.0 | 960 | 1.1387 | 0.5751 | | 0.9891 | 9.0 | 1080 | 1.2387 | 0.5464 | | 0.9635 | 10.0 | 1200 | 1.3772 | 0.5428 | | 0.9764 | 11.0 | 1320 | 1.4329 | 0.5258 | | 0.9375 | 12.0 | 1440 | 1.2830 | 0.5522 | | 0.9574 | 13.0 | 1560 | 1.4003 | 0.5229 | | 0.9907 | 14.0 | 1680 | 1.3447 | 0.5423 | | 0.9507 | 15.0 | 1800 | 1.2907 | 0.5604 | | 0.9866 | 16.0 | 1920 | 1.4578 | 0.5393 | | 0.9297 | 17.0 | 2040 | 1.4779 | 0.5282 | | 0.9385 | 18.0 | 2160 | 1.3874 | 0.5469 | | 0.9951 | 19.0 | 2280 | 1.2976 | 0.5587 | | 0.9794 | 20.0 | 2400 | 1.3110 | 0.5569 | | 0.9974 | 21.0 | 2520 | 1.3649 | 0.5276 | | 0.9284 | 22.0 | 2640 | 1.3713 | 0.5364 | | 0.9144 | 23.0 | 2760 | 1.4117 | 0.5340 | | 0.9771 | 24.0 | 2880 | 1.3836 | 0.5358 | | 0.8994 | 25.0 | 3000 | 1.5077 | 0.5282 | | 0.9061 | 26.0 | 3120 | 1.4622 | 0.5329 | | 0.9071 | 27.0 | 3240 | 1.4303 | 0.5393 | | 0.9288 | 28.0 | 3360 | 1.4556 | 0.5329 | | 0.9285 | 29.0 | 3480 | 1.3900 | 0.5446 | | 0.8955 | 30.0 | 3600 | 1.4082 | 0.5387 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
samitizerxu/large-algae-vit-rgb
samitizerxu
2023-02-17T09:31:17Z
27
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T23:57:45Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: large-algae-vit-rgb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # large-algae-vit-rgb This model is a fine-tuned version of [samitizerxu/large-algae-vit-rgb](https://huggingface.co/samitizerxu/large-algae-vit-rgb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1659 - Accuracy: 0.5798 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2115 | 1.0 | 120 | 0.9078 | 0.6315 | | 1.1249 | 2.0 | 240 | 0.9217 | 0.6320 | | 1.1385 | 3.0 | 360 | 0.9518 | 0.6180 | | 1.1347 | 4.0 | 480 | 1.0201 | 0.6068 | | 1.1358 | 5.0 | 600 | 1.0801 | 0.5892 | | 1.098 | 6.0 | 720 | 1.0932 | 0.5851 | | 1.0882 | 7.0 | 840 | 1.0347 | 0.6033 | | 1.0688 | 8.0 | 960 | 1.0403 | 0.6056 | | 1.0863 | 9.0 | 1080 | 1.0466 | 0.6009 | | 1.1253 | 10.0 | 1200 | 1.2308 | 0.5511 | | 1.0393 | 11.0 | 1320 | 1.1434 | 0.5869 | | 1.0749 | 12.0 | 1440 | 1.2155 | 0.5622 | | 1.0433 | 13.0 | 1560 | 1.2466 | 0.5522 | | 1.0141 | 14.0 | 1680 | 1.1880 | 0.5563 | | 1.0516 | 15.0 | 1800 | 1.1006 | 0.5992 | | 1.0696 | 16.0 | 1920 | 1.0971 | 0.5751 | | 0.9867 | 17.0 | 2040 | 1.1689 | 0.5827 | | 1.0234 | 18.0 | 2160 | 1.1846 | 0.5751 | | 1.0364 | 19.0 | 2280 | 1.1480 | 0.5739 | | 1.0314 | 20.0 | 2400 | 1.0977 | 0.5880 | | 1.0179 | 21.0 | 2520 | 1.1258 | 0.5851 | | 1.0584 | 22.0 | 2640 | 1.1569 | 0.5822 | | 1.0222 | 23.0 | 2760 | 1.1672 | 0.5839 | | 0.996 | 24.0 | 2880 | 1.1737 | 0.5798 | | 1.0343 | 25.0 | 3000 | 1.1588 | 0.5792 | | 0.9854 | 26.0 | 3120 | 1.1758 | 0.5763 | | 0.9753 | 27.0 | 3240 | 1.1715 | 0.5763 | | 0.9881 | 28.0 | 3360 | 1.1403 | 0.5839 | | 1.0057 | 29.0 | 3480 | 1.1765 | 0.5781 | | 0.9824 | 30.0 | 3600 | 1.1659 | 0.5798 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
kenyaari/bert-finetuned-squad
kenyaari
2023-02-17T09:18:04Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T06:54:11Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
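The card leaves usage unstated; a minimal hedged inference sketch with the `question-answering` pipeline (the question and context are made-up examples, not from the card):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="kenyaari/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This BERT model was fine-tuned on the SQuAD dataset for extractive QA.",
)
print(result["answer"], result["score"])
```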
ashutoshmondal/autotrain-wilderv2-3544295625
ashutoshmondal
2023-02-17T09:17:10Z
18
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "autotrain", "vision", "dataset:ashutoshmondal/autotrain-data-wilderv2", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-17T09:15:27Z
---
tags:
- autotrain
- vision
- image-classification
datasets:
- ashutoshmondal/autotrain-data-wilderv2
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
co2_eq_emissions:
  emissions: 2.829794634796424
---

# Model Trained Using AutoTrain

- Problem type: Binary Classification
- Model ID: 3544295625
- CO2 Emissions (in grams): 2.8298

## Validation Metrics

- Loss: 0.159
- Accuracy: 0.940
- Precision: 0.923
- Recall: 0.960
- AUC: 0.988
- F1: 0.941
WimStraetemans/ppo-LunarLander-CleanRL
WimStraetemans
2023-02-17T09:10:09Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T09:09:32Z
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 60.11 +/- 72.90
      name: mean_reward
      verified: false
---

# PPO Agent Playing LunarLander-v2

This is a trained model of a PPO agent playing LunarLander-v2.

# Hyperparameters

```python
{'exp_name': 'ppo'
 'seed': 1
 'torch_deterministic': True
 'cuda': True
 'track': False
 'wandb_project_name': 'cleanRL'
 'wandb_entity': None
 'capture_video': False
 'env_id': 'LunarLander-v2'
 'total_timesteps': 1000000
 'learning_rate': 0.00025
 'num_envs': 4
 'num_steps': 512
 'anneal_lr': True
 'gae': True
 'gamma': 0.999
 'gae_lambda': 0.98
 'num_minibatches': 4
 'update_epochs': 4
 'norm_adv': True
 'clip_coef': 0.2
 'clip_vloss': True
 'ent_coef': 0.01
 'vf_coef': 0.5
 'max_grad_norm': 0.5
 'target_kl': None
 'repo_id': 'Rowehn/ppo-LunarLander-CleanRL'
 'batch_size': 2048
 'minibatch_size': 512}
```
stevendee5/bert-finetuned-squad
stevendee5
2023-02-17T09:09:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-16T19:40:25Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
igmarco/intel-image-classification
igmarco
2023-02-17T09:04:01Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-02-16T23:24:09Z
---
tags:
- fastai
---

# Amazing!

🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!

# Some next steps

1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!

Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.

---

# Model card

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
pyesonekyaw/recycletree_metal
pyesonekyaw
2023-02-17T09:00:15Z
0
0
fastai
[ "fastai", "image-classification", "license:openrail", "region:us" ]
image-classification
2023-02-17T08:42:36Z
---
tags:
- fastai
library_name: fastai
pipeline_tag: image-classification
license: openrail
---

# RecycleTree - Metal Classification Model

![Banner](https://huggingface.co/pyesonekyaw/recycletree_plastic/resolve/main/banner.png)

RecycleTree is a project from CZ3002 Advanced Software Engineering in Nanyang Technological University. It aims to enable users to have a more informed recycling experience, from finding the nearest recycling bins, to checking whether the item they wish to recycle can indeed be recycled, to learning more about recycling and contamination in general. The whole project can be found on [GitHub](https://github.com/py-sk/RecycleTree)

This image classification model in particular classifies metal trash items into the following classes:

* Aerosol Can
* Aluminum Tray Foil
* Metal Can/Container

## Training Data

The training dataset had 10872 images across 3 classes, with each class having roughly the same distribution of images. The images were either scraped from Google image search or obtained by ourselves in real life.

## Training Procedure

As the purpose of this model was to act just as a proof of concept for quick prototyping of RecycleTree, I opted to use the fast.ai library and a simple model architecture of ResNet34. The training procedure follows the recommendations from [fast.ai](https://docs.fast.ai/)

## Other Models

There are also other models in the RecycleTree model series:

* [Materials Classification Model](https://huggingface.co/pyesonekyaw/recycletree_materials) - Classification of images of trash into different materials
* [Paper Classification Model](https://huggingface.co/pyesonekyaw/recycletree_paper) - Classification of images of paper trash into different classes
* [Plastic Classification Model](https://huggingface.co/pyesonekyaw/recycletree_plastic) - Classification of images of plastic trash into different classes
* [Glass Classification Model](https://huggingface.co/pyesonekyaw/recycletree_glass) - Classification of images of glass trash into different classes
* [Others Classification Model](https://huggingface.co/pyesonekyaw/recycletree_others) - Classification of images of other (not paper, metal, glass, or plastic) trash into different classes
ybelkada/gpt-j-6b-detoxified-24-shdl-400steps
ybelkada
2023-02-17T08:58:27Z
8
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T08:52:31Z
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for GPT-J 6B detoxified

<!-- Provide a quick summary of what the model is/does. -->

This model is a GPT-J 6B model that has been detoxified using RLHF.

# Training details

Training logs can be found [here](https://wandb.ai/distill-bloom/trl/runs/2dm41xvj?workspace=user-younesbelkada)
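The card documents training only; below is a hedged generation sketch. GPT-J 6B is large, so half precision and a GPU are assumed here; the prompt and sampling settings are arbitrary examples:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ybelkada/gpt-j-6b-detoxified-24-shdl-400steps"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# float16 is an assumption to fit the 6B model on a single GPU
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("The key to a good conversation is", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```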
pyesonekyaw/recycletree_glass
pyesonekyaw
2023-02-17T08:52:25Z
0
0
fastai
[ "fastai", "image-classification", "license:openrail", "region:us" ]
image-classification
2023-02-17T08:41:53Z
---
tags:
- fastai
library_name: fastai
pipeline_tag: image-classification
license: openrail
---

# RecycleTree - Glass Classification Model

![Banner](https://huggingface.co/pyesonekyaw/recycletree_plastic/resolve/main/banner.png)

RecycleTree is a project from CZ3002 Advanced Software Engineering in Nanyang Technological University. It aims to enable users to have a more informed recycling experience, from finding the nearest recycling bins, to checking whether the item they wish to recycle can indeed be recycled, to learning more about recycling and contamination in general. The whole project can be found on [GitHub](https://github.com/py-sk/RecycleTree)

This image classification model in particular classifies glass trash items into the following classes:

* Ceramic
* Glassware
* Lightbulbs

## Training Data

The training dataset had 7273 images across 3 classes, with each class having roughly the same distribution of images. The images were either scraped from Google image search or obtained by ourselves in real life.

## Training Procedure

As the purpose of this model was to act just as a proof of concept for quick prototyping of RecycleTree, I opted to use the fast.ai library and a simple model architecture of ResNet34. The training procedure follows the recommendations from [fast.ai](https://docs.fast.ai/)

## Other Models

There are also other models in the RecycleTree model series:

* [Materials Classification Model](https://huggingface.co/pyesonekyaw/recycletree_materials) - Classification of images of trash into different materials
* [Paper Classification Model](https://huggingface.co/pyesonekyaw/recycletree_paper) - Classification of images of paper trash into different classes
* [Metal Classification Model](https://huggingface.co/pyesonekyaw/recycletree_metal) - Classification of images of metal trash into different classes
* [Plastic Classification Model](https://huggingface.co/pyesonekyaw/recycletree_plastic) - Classification of images of plastic trash into different classes
* [Others Classification Model](https://huggingface.co/pyesonekyaw/recycletree_others) - Classification of images of other (not paper, metal, glass, or plastic) trash into different classes
paulinho123/PPO-LunarLander-v2-1
paulinho123
2023-02-17T08:51:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T08:50:59Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: -137.23 +/- 89.42
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption — check the repository's file list for the actual name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed by this card
checkpoint = load_from_hub(
    repo_id="paulinho123/PPO-LunarLander-v2-1",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
pyesonekyaw/recycletree_plastic
pyesonekyaw
2023-02-17T08:47:27Z
0
2
fastai
[ "fastai", "image-classification", "license:openrail", "region:us" ]
image-classification
2023-02-17T06:07:40Z
---
tags:
- fastai
library_name: fastai
pipeline_tag: image-classification
license: openrail
---

# RecycleTree - Plastics Classification Model

![Banner](https://huggingface.co/pyesonekyaw/recycletree_plastic/resolve/main/banner.png)

RecycleTree is a project from CZ3002 Advanced Software Engineering in Nanyang Technological University. It aims to enable users to have a more informed recycling experience, from finding the nearest recycling bins, to checking whether the item they wish to recycle can indeed be recycled, to learning more about recycling and contamination in general. The whole project can be found on [GitHub](https://github.com/py-sk/RecycleTree)

This image classification model in particular classifies plastic trash items into the following classes:

* CD
* Drinking Straws
* Plastic Bags
* Plastic Clothes Hanger
* Plastic Container/Bottle
* Plastic Disposables
* Plastic Packaging
* Plastic Packaging with Foil
* Styrofoam

## Training Data

The training dataset had 9646 images across 9 classes, with each class having roughly the same distribution of images. The images were either scraped from Google image search or obtained by ourselves in real life.

## Training Procedure

As the purpose of this model was to act just as a proof of concept for quick prototyping of RecycleTree, I opted to use the fast.ai library and a simple model architecture of ResNet34. The training procedure follows the recommendations from [fast.ai](https://docs.fast.ai/)

## Other Models

There are also other models in the RecycleTree model series:

* [Materials Classification Model](https://huggingface.co/pyesonekyaw/recycletree_materials) - Classification of images of trash into different materials
* [Paper Classification Model](https://huggingface.co/pyesonekyaw/recycletree_paper) - Classification of images of paper trash into different classes
* [Metal Classification Model](https://huggingface.co/pyesonekyaw/recycletree_metal) - Classification of images of metal trash into different classes
* [Glass Classification Model](https://huggingface.co/pyesonekyaw/recycletree_glass) - Classification of images of glass trash into different classes
* [Others Classification Model](https://huggingface.co/pyesonekyaw/recycletree_others) - Classification of images of other (not paper, metal, glass, or plastic) trash into different classes
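The card does not show inference; a hedged sketch using the fastai integration in `huggingface_hub` (the image path is a placeholder, not a file from the repo):

```python
from huggingface_hub import from_pretrained_fastai

# Downloads the exported learner from the Hub
learner = from_pretrained_fastai("pyesonekyaw/recycletree_plastic")

# Predict the plastic sub-class of a single image (placeholder path)
pred_class, pred_idx, probs = learner.predict("some_trash_item.jpg")
print(pred_class, probs[pred_idx])
```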
algoprivacy/bert-finetuned-squad
algoprivacy
2023-02-17T08:39:26Z
4
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-14T20:19:25Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.12.0+cu102 - Datasets 2.9.0 - Tokenizers 0.13.2
taraxis/melov2
taraxis
2023-02-17T07:38:29Z
1
1
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T07:37:03Z
---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: mloctst
---

### Melov2 Dreambooth model trained by taraxis with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model

You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!

Sample pictures of: mloctst (use that in your prompt)

![mloctst 0](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%281%29.jpg)
![mloctst 1](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%282%29.jpg)
![mloctst 2](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%283%29.jpg)
![mloctst 3](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%284%29.jpg)
![mloctst 4](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%285%29.jpg)
![mloctst 5](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%286%29.jpg)
![mloctst 6](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%287%29.jpg)
![mloctst 7](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%288%29.jpg)
![mloctst 8](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%289%29.jpg)
![mloctst 9](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2810%29.jpg)
![mloctst 10](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2811%29.jpg)
![mloctst 11](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2812%29.jpg)
![mloctst 12](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2813%29.jpg)
![mloctst 13](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2814%29.jpg)
![mloctst 14](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2815%29.jpg)
![mloctst 15](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2816%29.jpg)
![mloctst 16](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2817%29.jpg)
![mloctst 17](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2818%29.jpg)
![mloctst 18](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2819%29.jpg)
![mloctst 19](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2820%29.jpg)
![mloctst 20](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2821%29.jpg)
![mloctst 21](https://huggingface.co/taraxis/melov2/resolve/main/concept_images/mloctst_%2822%29.jpg)
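Alongside the linked notebook, a hedged local-inference sketch with `diffusers` (half precision and the prompt are ordinary assumptions, not settings from the card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("taraxis/melov2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The concept token from the card must appear in the prompt
image = pipe("a portrait photo of mloctst, studio lighting").images[0]
image.save("mloctst.png")
```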
noodlynoodle/a2c-PandaReachDense-v2
noodlynoodle
2023-02-17T07:09:09Z
4
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T07:06:41Z
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v2
      type: PandaReachDense-v2
    metrics:
    - type: mean_reward
      value: -0.94 +/- 0.28
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **PandaReachDense-v2**

This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption — check the repository's file list; evaluating also requires `panda_gym` for the environment):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is assumed, not confirmed by this card
checkpoint = load_from_hub(
    repo_id="noodlynoodle/a2c-PandaReachDense-v2",
    filename="a2c-PandaReachDense-v2.zip",
)
model = A2C.load(checkpoint)
```
dcerys/distilbert-base-uncased-finetuned-squad
dcerys
2023-02-17T07:05:23Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T03:27:04Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1511

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2103        | 1.0   | 5533  | 1.1582          |
| 0.9536        | 2.0   | 11066 | 1.1241          |
| 0.7529        | 3.0   | 16599 | 1.1511          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
egyee/dqn-SpaceInvadersNoFrameskip-v4-Test_1
egyee
2023-02-17T06:59:38Z
8
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T06:58:58Z
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 227.50 +/- 137.95
      name: mean_reward
      verified: false
---

# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**

This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eryzml -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga eryzml -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```

## Training (with the RL Zoo)

```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga eryzml
```

## Hyperparameters

```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 150000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.00025),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
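The exploration settings above imply a linear epsilon schedule; a hedged sketch of how those three numbers interact (a reconstruction of standard SB3 behaviour, not code from this repo):

```python
def epsilon(step, n_timesteps=1_000_000, fraction=0.1, final_eps=0.01, start_eps=1.0):
    """Linear decay over the first `fraction` of training, then constant."""
    progress = min(step / (fraction * n_timesteps), 1.0)
    return start_eps + progress * (final_eps - start_eps)

print(epsilon(0), epsilon(50_000), epsilon(100_000), epsilon(500_000))
# 1.0 -> 0.505 -> 0.01 -> 0.01
```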
Elaina617/anything-orangemix2
Elaina617
2023-02-17T06:48:46Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-17T06:42:16Z
--- license: creativeml-openrail-m ---
Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-og_dataset_10e-finetuned-og_dataset_10e
Gokulapriyan
2023-02-17T06:46:35Z
36
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-17T04:47:53Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-og_dataset_10e-finetuned-og_dataset_10e
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# swin-tiny-patch4-window7-224-finetuned-og_dataset_10e-finetuned-og_dataset_10e

This model is a fine-tuned version of [Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-og_dataset_10e](https://huggingface.co/Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-og_dataset_10e) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0340
- eval_accuracy: 0.9878
- eval_runtime: 171.5097
- eval_samples_per_second: 72.002
- eval_steps_per_second: 2.251
- epoch: 4.0
- step: 2184

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
trinket2023/BERTModelQA
trinket2023
2023-02-17T06:42:00Z
21
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-17T03:43:40Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: BERTModelQA
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# BERTModelQA

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1564

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3855        | 1.0   | 2188 | 1.2194          |
| 1.0469        | 2.0   | 4376 | 1.1453          |
| 0.8124        | 3.0   | 6564 | 1.1564          |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
RyuExcalibur/bart-large-mnli-aitools-8n
RyuExcalibur
2023-02-17T06:33:12Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T06:00:19Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: bart-large-mnli-aitools-8n results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-mnli-aitools-8n This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2700 - Accuracy: 0.9630 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.07 | 50 | 0.5082 | 0.8580 | | No log | 0.14 | 100 | 0.5312 | 0.8580 | | No log | 0.21 | 150 | 0.3020 | 0.9259 | | No log | 0.27 | 200 | 0.3802 | 0.9259 | | No log | 0.34 | 250 | 0.3721 | 0.9259 | | No log | 0.41 | 300 | 0.3692 | 0.9321 | | No log | 0.48 | 350 | 0.4657 | 0.8951 | | No log | 0.55 | 400 | 0.5192 | 0.9198 | | No log | 0.62 | 450 | 0.4348 | 0.9259 | | 0.3718 | 0.68 | 500 | 0.3369 | 0.9383 | | 0.3718 | 0.75 | 550 | 0.3150 | 0.9444 | | 0.3718 | 0.82 | 600 | 0.2712 | 0.9630 | | 0.3718 | 0.89 | 650 | 0.2900 | 0.9444 | | 0.3718 | 0.96 | 700 | 0.2895 | 0.9444 | | 0.3718 | 1.03 | 750 | 0.2578 | 0.9383 | | 0.3718 | 1.09 | 800 | 0.3731 | 0.9506 | | 0.3718 | 1.16 | 850 | 0.1916 | 0.9506 | | 0.3718 | 1.23 | 900 | 0.1980 | 0.9444 | | 0.3718 | 1.3 | 950 | 0.3446 | 0.9506 | | 0.2003 | 1.37 | 1000 | 0.3997 | 0.9444 | | 0.2003 | 1.44 | 1050 | 0.3500 | 0.9444 | | 0.2003 | 1.5 | 1100 | 0.2820 | 0.9444 | | 0.2003 | 1.57 | 1150 | 0.3192 | 0.9506 | | 0.2003 | 1.64 | 1200 | 0.3207 | 0.9444 | | 0.2003 | 1.71 | 1250 | 0.2535 | 0.9444 | | 0.2003 | 1.78 | 1300 | 0.2543 | 0.9506 | | 0.2003 | 1.85 | 1350 | 0.2218 | 0.9691 | | 0.2003 | 1.92 | 1400 | 0.3685 | 0.9444 | | 0.2003 | 1.98 | 1450 | 0.2633 | 0.9630 | | 0.1534 | 2.05 | 1500 | 0.2700 | 0.9630 | | 0.1534 | 2.12 | 1550 | 0.1888 | 0.9568 | | 0.1534 | 2.19 | 1600 | 0.2366 | 0.9630 | | 0.1534 | 2.26 | 1650 | 0.2998 | 0.9630 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
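No usage snippet accompanies the card above; a minimal hedged sketch with the `text-classification` pipeline (the example sentence is made up, and the meaning of the label ids is an assumption — the card does not list the class names):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="RyuExcalibur/bart-large-mnli-aitools-8n")

# Label names depend on the fine-tuning data, which the card does not describe
print(classifier("An assistant that generates images from text prompts."))
```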
huggingtweets/notgnasukitself
huggingtweets
2023-02-17T06:25:18Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T06:24:22Z
--- language: en thumbnail: http://www.huggingtweets.com/notgnasukitself/1676615114411/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1519358387833786370/HPTTS95u_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ً</div> <div style="text-align: center; font-size: 14px;">@notgnasukitself</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ً. | Data | ً | | --- | --- | | Tweets downloaded | 3074 | | Retweets | 625 | | Short tweets | 711 | | Tweets kept | 1738 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/j2r415mf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @notgnasukitself's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/pggvrw7n) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/pggvrw7n/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/notgnasukitself') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnAmbitiousMonk/ppo-LunarLander-v4
AnAmbitiousMonk
2023-02-17T06:21:26Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T17:25:25Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 266.00 +/- 19.45
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption -- verify it against this repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumed convention -- check the files in this repo.
checkpoint = load_from_hub(repo_id="AnAmbitiousMonk/ppo-LunarLander-v4", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
noodlynoodle/a2c-AntBulletEnv-v0
noodlynoodle
2023-02-17T06:15:09Z
3
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-02T06:34:42Z
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 1429.90 +/- 212.99
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption -- verify it against this repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumed convention -- check the files in this repo.
checkpoint = load_from_hub(repo_id="noodlynoodle/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
RyuExcalibur/bart-large-mnli-aitools-6n
RyuExcalibur
2023-02-17T05:58:42Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T05:34:54Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: bart-large-mnli-aitools-6n results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-mnli-aitools-6n This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2748 - Accuracy: 0.9444 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.09 | 50 | 0.0885 | 0.9762 | | No log | 0.18 | 100 | 0.4805 | 0.8571 | | No log | 0.26 | 150 | 0.2582 | 0.9524 | | No log | 0.35 | 200 | 0.2742 | 0.9286 | | No log | 0.44 | 250 | 0.1553 | 0.9683 | | No log | 0.53 | 300 | 0.2574 | 0.9603 | | No log | 0.62 | 350 | 0.3690 | 0.9444 | | No log | 0.7 | 400 | 0.3113 | 0.9365 | | No log | 0.79 | 450 | 0.3474 | 0.9206 | | 0.3671 | 0.88 | 500 | 0.2385 | 0.9206 | | 0.3671 | 0.97 | 550 | 0.2947 | 0.9365 | | 0.3671 | 1.05 | 600 | 0.2834 | 0.9444 | | 0.3671 | 1.14 | 650 | 0.2425 | 0.9524 | | 0.3671 | 1.23 | 700 | 0.2494 | 0.9524 | | 0.3671 | 1.32 | 750 | 0.3040 | 0.9444 | | 0.3671 | 1.41 | 800 | 0.2974 | 0.9444 | | 0.3671 | 1.49 | 850 | 0.2268 | 0.9683 | | 0.3671 | 1.58 | 900 | 0.3889 | 0.9365 | | 0.3671 | 1.67 | 950 | 0.3333 | 0.8968 | | 0.1777 | 1.76 | 1000 | 0.2748 | 0.9444 | | 0.1777 | 1.85 | 1050 | 0.3463 | 0.9206 | | 0.1777 | 1.93 | 1100 | 0.2951 | 0.9444 | | 0.1777 | 2.02 | 1150 | 0.2726 | 0.9524 | | 0.1777 | 2.11 | 1200 | 0.3241 | 0.9444 | | 0.1777 | 2.2 | 1250 | 0.3543 | 0.9365 | | 0.1777 | 2.28 | 1300 | 0.4440 | 0.9444 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
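## Example inference (sketch)

A minimal inference sketch (the example input is illustrative, not from the training data; the label names come from the checkpoint's own config):

```python
from transformers import pipeline

# Minimal sketch: the fine-tuned checkpoint carries its label mapping in its config.
classifier = pipeline("text-classification", model="RyuExcalibur/bart-large-mnli-aitools-6n")
print(classifier("An app that turns text prompts into images"))
```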
Seiriryu/VToonify
Seiriryu
2023-02-17T05:37:15Z
0
1
pytorch
[ "pytorch", "style-transfer", "face-stylization", "arxiv:2209.11224", "region:us" ]
null
2023-02-17T04:52:32Z
--- library_name: pytorch tags: - style-transfer - face-stylization --- ## Model Details This system provides a web demo for the following paper: **VToonify: Controllable High-Resolution Portrait Video Style Transfer (TOG/SIGGRAPH Asia 2022)** - Developed by: Shuai Yang, Liming Jiang, Ziwei Liu and Chen Change Loy - Resources for more information: - [Project Page](https://www.mmlab-ntu.com/project/vtoonify/) - [Research Paper](https://arxiv.org/abs/2209.11224) - [GitHub Repo](https://github.com/williamyang1991/VToonify) **Abstract** > Generating high-quality artistic portrait videos is an important and desirable task in computer graphics and vision. Although a series of successful portrait image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have obvious limitations when applied to videos, such as the fixed frame size, the requirement of face alignment, missing non-facial details and temporal inconsistency. In this work, we investigate the challenging controllable high-resolution portrait video style transfer by introducing a novel **VToonify** framework. Specifically, VToonify leverages the mid- and high-resolution layers of StyleGAN to render high-quality artistic portraits based on the multi-scale content features extracted by an encoder to better preserve the frame details. The resulting fully convolutional architecture accepts non-aligned faces in videos of variable size as input, contributing to complete face regions with natural motions in the output. Our framework is compatible with existing StyleGAN-based image toonification models to extend them to video toonification, and inherits appealing features of these models for flexible style control on color and intensity. This work presents two instantiations of VToonify built upon Toonify and DualStyleGAN for collection-based and exemplar-based portrait video style transfer, respectively. Extensive experimental results demonstrate the effectiveness of our proposed VToonify framework over existing methods in generating high-quality and temporally-coherent artistic portrait videos with flexible style controls. ## Citation Information ```bibtex @article{yang2022Vtoonify, title={VToonify: Controllable High-Resolution Portrait Video Style Transfer}, author={Yang, Shuai and Jiang, Liming and Liu, Ziwei and Loy, Chen Change}, journal={ACM Transactions on Graphics (TOG)}, volume={41}, number={6}, articleno={203}, pages={1--15}, year={2022}, publisher={ACM New York, NY, USA}, doi={10.1145/3550454.3555437}, } ``` ## License [S-Lab License 1.0](https://github.com/williamyang1991/VToonify/blob/main/LICENSE.md)
taraxis/melov1
taraxis
2023-02-17T05:06:07Z
2
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T04:56:42Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: mlloctst --- ### Melov1 Dreambooth model trained by taraxis with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: mlloctst (use that on your prompt) ![mlloctst 0](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%281%29.jpg)![mlloctst 1](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%282%29.jpg)![mlloctst 2](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%283%29.jpg)![mlloctst 3](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%284%29.jpg)![mlloctst 4](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%285%29.jpg)![mlloctst 5](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%286%29.jpg)![mlloctst 6](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%287%29.jpg)![mlloctst 7](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%288%29.jpg)![mlloctst 8](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%289%29.jpg)![mlloctst 9](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2810%29.jpg)![mlloctst 10](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2811%29.jpg)![mlloctst 11](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2812%29.jpg)![mlloctst 12](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2813%29.jpg)![mlloctst 13](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2814%29.jpg)![mlloctst 14](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2815%29.jpg)![mlloctst 15](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2816%29.jpg)![mlloctst 16](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2817%29.jpg)![mlloctst 17](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2818%29.jpg)![mlloctst 18](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2819%29.jpg)![mlloctst 19](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2820%29.jpg)![mlloctst 20](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2821%29.jpg)![mlloctst 21](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2822%29.jpg)![mlloctst 22](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2823%29.jpg)![mlloctst 23](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2824%29.jpg)![mlloctst 24](https://huggingface.co/taraxis/mellov1/resolve/main/concept_images/mlloctst_%2825%29.jpg)
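A minimal `diffusers` sketch for trying the concept locally (the fp16/CUDA settings and prompt are illustrative assumptions, not the author's setup):

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the Dreambooth checkpoint and prompt with the concept token.
pipe = StableDiffusionPipeline.from_pretrained("taraxis/melov1", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait of mlloctst, studio lighting").images[0]
image.save("mlloctst.png")
```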
Brhnglc/a2c-AntBulletEnv-v0
Brhnglc
2023-02-17T05:03:30Z
0
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T20:33:46Z
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: AntBulletEnv-v0
      type: AntBulletEnv-v0
    metrics:
    - type: mean_reward
      value: 779.89 +/- 71.24
      name: mean_reward
      verified: false
---

# **A2C** Agent playing **AntBulletEnv-v0**

This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal loading sketch (the checkpoint filename is an assumption -- verify it against this repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is an assumed convention -- check the files in this repo.
checkpoint = load_from_hub(repo_id="Brhnglc/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
huggingtweets/dearearth_-elonmusk
huggingtweets
2023-02-17T05:03:02Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T05:02:52Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1611463006839250989/1prYL3Od_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & DearEarth.eth</div> <div style="text-align: center; font-size: 14px;">@dearearth_-elonmusk</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & DearEarth.eth. | Data | Elon Musk | DearEarth.eth | | --- | --- | --- | | Tweets downloaded | 3194 | 3233 | | Retweets | 174 | 2466 | | Short tweets | 1045 | 106 | | Tweets kept | 1975 | 661 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/9lsofvbj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dearearth_-elonmusk's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3as3kvm7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3as3kvm7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dearearth_-elonmusk') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
RyuExcalibur/bart-large-mnli-aitools-11n
RyuExcalibur
2023-02-17T04:09:30Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T03:41:44Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: bart-large-mnli-aitools-11n results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-mnli-aitools-11n This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1504 - Accuracy: 0.9631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.05 | 50 | 0.5899 | 0.9171 | | No log | 0.1 | 100 | 0.2924 | 0.9171 | | No log | 0.15 | 150 | 0.1300 | 0.9171 | | No log | 0.21 | 200 | 0.3631 | 0.9447 | | No log | 0.26 | 250 | 0.1954 | 0.9493 | | No log | 0.31 | 300 | 0.2654 | 0.9447 | | No log | 0.36 | 350 | 0.3464 | 0.9401 | | No log | 0.41 | 400 | 0.1400 | 0.9585 | | No log | 0.46 | 450 | 0.1686 | 0.9631 | | 0.311 | 0.51 | 500 | 0.2399 | 0.9447 | | 0.311 | 0.56 | 550 | 0.2273 | 0.9585 | | 0.311 | 0.62 | 600 | 0.0956 | 0.9770 | | 0.311 | 0.67 | 650 | 0.1788 | 0.9309 | | 0.311 | 0.72 | 700 | 0.1840 | 0.9447 | | 0.311 | 0.77 | 750 | 0.1828 | 0.9631 | | 0.311 | 0.82 | 800 | 0.0765 | 0.9770 | | 0.311 | 0.87 | 850 | 0.1851 | 0.9585 | | 0.311 | 0.92 | 900 | 0.0854 | 0.9724 | | 0.311 | 0.97 | 950 | 0.0783 | 0.9724 | | 0.2123 | 1.03 | 1000 | 0.1504 | 0.9631 | | 0.2123 | 1.08 | 1050 | 0.2845 | 0.9585 | | 0.2123 | 1.13 | 1100 | 0.2655 | 0.9585 | | 0.2123 | 1.18 | 1150 | 0.1718 | 0.9585 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
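## Example inference (sketch)

A minimal inference sketch without the `pipeline` helper (the example input is illustrative, not from the training data):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "RyuExcalibur/bart-large-mnli-aitools-11n"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Tokenize a single input and read out class probabilities.
inputs = tokenizer("A chatbot that answers customer questions", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```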
xenon3134-mc/empty-eyes-LoRAs
xenon3134-mc
2023-02-17T03:27:32Z
0
17
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-02-10T05:49:44Z
---
license: creativeml-openrail-m
---

# LoRAs

When using these LoRAs, you may get better results by redrawing only the face or eyes with inpaint, or by reducing the LoRA weight.

- [utsurome_v3.safetensors](#utsurome_v3.safetensors)
  - base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)
  - training dataset: [empty-eyes-dataset](https://huggingface.co/datasets/xenon3134-mc/empty-eyes-dataset/tree/main/empty_eyes)
- [yorime.safetensors](#yorime.safetensors)
  - base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)
- [shirome.safetensors](#shirome.safetensors)
  - base model: [7th_anime_3.1_Cg](https://huggingface.co/syaimu/7th_test)

# utsurome_v3.safetensors

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/utsurome_v3.png" width="512" height="768">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/utsurome_v3.png)

<details>
<summary>Sample Prompt</summary>
<pre>
masterpiece, best quality, 1girl, empty eyes, utsurome, maid, smile
Negative prompt: (worst quality:1.4), (low quality:1.4), nsfw,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 416625012, Size: 512x768, Model hash: 49576e83ad, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: utsurome_v5(2f0093a7aa52), AddNet Weight A 1: 0.6, AddNet Weight B 1: 0.6</pre>
</details>

v3 has improved image quality compared to v2.

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/Comparison.png" width="1200" height="600">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/Comparison.png)

# yorime.safetensors

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/yorime.png" width="512" height="768">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/yorime.png)

<details>
<summary>Sample Prompt</summary>
<pre>
masterpiece, best quality, 1girl, empty eyes, yorime, smlie, maid
Negative prompt: (worst quality:1.4), (low quality:1.4), (nsfw: 1.3), blush
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 541777558, Size: 512x768, Model hash: 49576e83ad, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: yorime(5924d7962886), AddNet Weight A 1: 1, AddNet Weight B 1: 1</pre>
</details>
<br/>

# shirome.safetensors

[<img src="https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/shirome.png" width="512" height="768">](https://huggingface.co/xenon3134-mc/empty-eyes-LoRAs/resolve/main/samples/shirome.png)

<details>
<summary>Sample Prompt</summary>
<pre>
masterpiece, best quality, 1girl, shirome, blank eyes,upper body,
Negative prompt: (worst quality:1.4), (low quality:1.4), (nsfw:1.3), blush, monochrome,
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 266137625, Size: 512x768, Model hash: 49576e83ad, Clip skip: 2, ENSD: 31337, AddNet Enabled: True, AddNet Module 1: LoRA, AddNet Model 1: shirome(b89df5356523), AddNet Weight A 1: 1, AddNet Weight B 1: 1</pre>
</details>
ZhihongDeng/ppo-Huggy
ZhihongDeng
2023-02-17T03:25:19Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2023-02-17T03:25:11Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ZhihongDeng/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
Fred99774/valendra
Fred99774
2023-02-17T03:25:01Z
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-17T03:21:50Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### valendra Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
rishabhjain16/whisper_large_v2_to_myst_cmu_pf_ot100
rishabhjain16
2023-02-17T03:23:21Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-15T23:14:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-large-v2 results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_myst type: rishabhjain16/infer_myst config: en split: test metrics: - type: wer value: 11.62 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pfs type: rishabhjain16/infer_pfs config: en split: test metrics: - type: wer value: 2.84 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_cmu type: rishabhjain16/infer_cmu config: en split: test metrics: - type: wer value: 1.75 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/libritts_dev_clean type: rishabhjain16/libritts_dev_clean config: en split: test metrics: - type: wer value: 4.53 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_swedish type: rishabhjain16/infer_pf_swedish config: en split: test metrics: - type: wer value: 8.36 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_german type: rishabhjain16/infer_pf_german config: en split: test metrics: - type: wer value: 34.26 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_italian type: rishabhjain16/infer_pf_italian config: en split: test metrics: - type: wer value: 4.4 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_so_chinese type: rishabhjain16/infer_so_chinese config: en split: test metrics: - type: wer value: 14.52 name: WER --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-large-v2 This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1969 - Wer: 9.3970 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.6144 | 0.12 | 500 | 0.2795 | 14.0737 | | 0.1643 | 0.25 | 1000 | 0.2213 | 11.4916 | | 0.2175 | 0.38 | 1500 | 0.2009 | 10.0021 | | 0.1512 | 1.11 | 2000 | 0.1980 | 11.2632 | | 0.1527 | 1.24 | 2500 | 0.1916 | 10.8469 | | 0.0918 | 1.36 | 3000 | 0.1890 | 9.6498 | | 0.047 | 2.1 | 3500 | 0.2034 | 9.4274 | | 0.0822 | 2.23 | 4000 | 0.1969 | 9.3970 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
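## Example inference (sketch)

A minimal transcription sketch (`sample.wav` is a placeholder for your own audio file):

```python
from transformers import pipeline

# Minimal sketch: transcribe a local audio file with the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="rishabhjain16/whisper_large_v2_to_myst_cmu_pf_ot100")
print(asr("sample.wav")["text"])
```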
RyuExcalibur/bart-large-mnli-aitools-7n
RyuExcalibur
2023-02-17T03:03:51Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-17T02:18:47Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: bart-large-mnli-aitools-7n results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-mnli-aitools-7n This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2440 - Accuracy: 0.9653 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.08 | 50 | 0.3695 | 0.8819 | | No log | 0.15 | 100 | 0.5887 | 0.8819 | | No log | 0.23 | 150 | 0.4348 | 0.8819 | | No log | 0.31 | 200 | 0.5770 | 0.8819 | | No log | 0.38 | 250 | 0.3552 | 0.9306 | | No log | 0.46 | 300 | 0.2887 | 0.9306 | | No log | 0.54 | 350 | 0.3606 | 0.9444 | | No log | 0.62 | 400 | 0.3048 | 0.9444 | | No log | 0.69 | 450 | 0.3399 | 0.9028 | | 0.4278 | 0.77 | 500 | 0.3600 | 0.9236 | | 0.4278 | 0.85 | 550 | 0.3100 | 0.9375 | | 0.4278 | 0.92 | 600 | 0.3624 | 0.9444 | | 0.4278 | 1.0 | 650 | 0.3367 | 0.9444 | | 0.4278 | 1.08 | 700 | 0.2593 | 0.9444 | | 0.4278 | 1.15 | 750 | 0.3215 | 0.9236 | | 0.4278 | 1.23 | 800 | 0.3484 | 0.9306 | | 0.4278 | 1.31 | 850 | 0.3628 | 0.9167 | | 0.4278 | 1.38 | 900 | 0.3267 | 0.9444 | | 0.4278 | 1.46 | 950 | 0.3527 | 0.9375 | | 0.2206 | 1.54 | 1000 | 0.3661 | 0.9306 | | 0.2206 | 1.62 | 1050 | 0.2522 | 0.9514 | | 0.2206 | 1.69 | 1100 | 0.3929 | 0.9167 | | 0.2206 | 1.77 | 1150 | 0.2960 | 0.9306 | | 0.2206 | 1.85 | 1200 | 0.2918 | 0.9444 | | 0.2206 | 1.92 | 1250 | 0.2746 | 0.9514 | | 0.2206 | 2.0 | 1300 | 0.2954 | 0.9583 | | 0.2206 | 2.08 | 1350 | 0.2634 | 0.9375 | | 0.2206 | 2.15 | 1400 | 0.3141 | 0.9514 | | 0.2206 | 2.23 | 1450 | 0.2427 | 0.9514 | | 0.1761 | 2.31 | 1500 | 0.2440 | 0.9653 | | 0.1761 | 2.38 | 1550 | 0.2204 | 0.9653 | | 0.1761 | 2.46 | 1600 | 0.2171 | 0.9653 | | 0.1761 | 2.54 | 1650 | 0.2676 | 0.9583 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
jhsign/xlm-roberta-base-finetuned-panx-ko
jhsign
2023-02-17T02:58:21Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-02-17T02:44:53Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-ko results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.ko split: validation args: PAN-X.ko metrics: - name: F1 type: f1 value: 0.8620297699594046 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-ko This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1756 - F1: 0.8620 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.346 | 1.0 | 787 | 0.2067 | 0.8033 | | 0.172 | 2.0 | 1574 | 0.1835 | 0.8382 | | 0.1082 | 3.0 | 2361 | 0.1756 | 0.8620 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
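## Example inference (sketch)

A minimal NER sketch (the example sentence is illustrative, not from the dataset):

```python
from transformers import pipeline

# Minimal sketch: grouped-entity NER with the fine-tuned checkpoint.
ner = pipeline("token-classification", model="jhsign/xlm-roberta-base-finetuned-panx-ko", aggregation_strategy="simple")
print(ner("삼성전자는 서울에 본사를 두고 있다."))
```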
andreids/en_textcat_website_expenses_out
andreids
2023-02-17T02:56:37Z
0
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-17T02:56:23Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_textcat_website_expenses_out results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_textcat_website_expenses_out` | | **Version** | `0.0.1` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `textcat` | | **Components** | `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `OTHER`, `5700 - Website expenses` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 72.62 | | `CATS_MICRO_P` | 98.77 | | `CATS_MICRO_R` | 98.77 | | `CATS_MICRO_F` | 98.77 | | `CATS_MACRO_P` | 85.79 | | `CATS_MACRO_R` | 66.66 | | `CATS_MACRO_F` | 72.62 | | `CATS_MACRO_AUC` | 86.53 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TEXTCAT_LOSS` | 152.97 |
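### Example usage (sketch)

A minimal scoring sketch, assuming the pipeline package has already been installed from this repo (spaCy pipelines on the Hub are typically published as installable wheels); the example text is illustrative:

```python
import spacy

# Minimal sketch: load the installed pipeline package and read the textcat scores.
nlp = spacy.load("en_textcat_website_expenses_out")
doc = nlp("Monthly invoice for website hosting and domain renewal")
print(doc.cats)  # scores for OTHER vs. "5700 - Website expenses"
```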
andreids/en_textcat_transport_local_out
andreids
2023-02-17T02:54:06Z
0
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-17T02:53:45Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_textcat_transport_local_out results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_textcat_transport_local_out` | | **Version** | `0.0.1` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `textcat` | | **Components** | `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `OTHER`, `5650 - Transport - local` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 82.63 | | `CATS_MICRO_P` | 98.69 | | `CATS_MICRO_R` | 98.69 | | `CATS_MICRO_F` | 98.69 | | `CATS_MACRO_P` | 87.11 | | `CATS_MACRO_R` | 79.15 | | `CATS_MACRO_F` | 82.63 | | `CATS_MACRO_AUC` | 87.65 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TEXTCAT_LOSS` | 107.50 |
Madhana/distilroberta-base-finetuned-wikitext2
Madhana
2023-02-17T02:32:25Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-17T02:00:53Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0852 | 1.0 | 2406 | 1.9225 | | 1.993 | 2.0 | 4812 | 1.8837 | | 1.9616 | 3.0 | 7218 | 1.8234 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
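## Example inference (sketch)

A minimal fill-mask sketch (the example sentence is illustrative):

```python
from transformers import pipeline

# Minimal sketch: RoBERTa-style checkpoints use "<mask>" as the mask token.
fill_mask = pipeline("fill-mask", model="Madhana/distilroberta-base-finetuned-wikitext2")
print(fill_mask("The capital of France is <mask>."))
```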
andreids/en_textcat_advertising_out
andreids
2023-02-17T02:16:44Z
1
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2023-02-17T02:16:30Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_textcat_advertising_out results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_textcat_advertising_out` | | **Version** | `0.0.1` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `textcat` | | **Components** | `textcat` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (2 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `OTHER`, `5020 - Advertising` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 84.23 | | `CATS_MICRO_P` | 98.76 | | `CATS_MICRO_R` | 98.76 | | `CATS_MICRO_F` | 98.76 | | `CATS_MACRO_P` | 89.84 | | `CATS_MACRO_R` | 80.05 | | `CATS_MACRO_F` | 84.23 | | `CATS_MACRO_AUC` | 91.10 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TEXTCAT_LOSS` | 111.55 |
nc33/my_awesome_qa_model
nc33
2023-02-17T02:14:11Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-02-16T01:45:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 3.3860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 100 | 3.6478 | | No log | 2.0 | 200 | 3.4720 | | No log | 3.0 | 300 | 3.3860 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
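## Example inference (sketch)

A minimal extractive-QA sketch (question and context are illustrative, not from SQuAD):

```python
from transformers import pipeline

# Minimal sketch: the model selects an answer span from the given context.
qa = pipeline("question-answering", model="nc33/my_awesome_qa_model")
print(qa(question="What does extractive QA return?",
         context="Extractive QA returns the answer span selected from a context passage."))
```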
shubhamgv/test-model
shubhamgv
2023-02-17T02:05:20Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2023-02-17T02:04:55Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ## Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ## Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [optional] [More Information Needed] ### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> ### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] ### Summary # Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] [More Information Needed] # Model Card Authors [optional] [More Information Needed] # Model Card Contact [More Information Needed]
Madhana/distilgpt2-finetuned-wikitext2
Madhana
2023-02-17T01:57:09Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T01:18:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7602 | 1.0 | 2334 | 3.6669 | | 3.653 | 2.0 | 4668 | 3.6472 | | 3.6006 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.9.0 - Tokenizers 0.13.2
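## Example inference (sketch)

A minimal generation sketch (prompt and length are illustrative):

```python
from transformers import pipeline

# Minimal sketch: sample a short continuation from the fine-tuned checkpoint.
generator = pipeline("text-generation", model="Madhana/distilgpt2-finetuned-wikitext2")
print(generator("The history of the valley", max_new_tokens=40)[0]["generated_text"])
```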
rishabhjain16/whisper_medium_en_to_myst_cmu_pf_ot100
rishabhjain16
2023-02-17T01:52:06Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-02-15T23:13:55Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: openai/whisper-medium.en results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_myst type: rishabhjain16/infer_myst config: en split: test metrics: - type: wer value: 11.88 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pfs type: rishabhjain16/infer_pfs config: en split: test metrics: - type: wer value: 3.28 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_cmu type: rishabhjain16/infer_cmu config: en split: test metrics: - type: wer value: 1.98 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/libritts_dev_clean type: rishabhjain16/libritts_dev_clean config: en split: test metrics: - type: wer value: 5.15 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_swedish type: rishabhjain16/infer_pf_swedish config: en split: test metrics: - type: wer value: 8.16 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_german type: rishabhjain16/infer_pf_german config: en split: test metrics: - type: wer value: 34.99 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_pf_italian type: rishabhjain16/infer_pf_italian config: en split: test metrics: - type: wer value: 4.65 name: WER - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: rishabhjain16/infer_so_chinese type: rishabhjain16/infer_so_chinese config: en split: test metrics: - type: wer value: 15.87 name: WER --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # openai/whisper-medium.en This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2994 - Wer: 9.7808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.235 | 0.12 | 500 | 0.2735 | 11.0733 | | 0.1927 | 1.06 | 1000 | 0.2339 | 10.5575 | | 0.1119 | 1.18 | 1500 | 0.2280 | 9.6803 | | 0.0863 | 2.12 | 2000 | 0.2379 | 11.0621 | | 0.0322 | 3.05 | 2500 | 0.2614 | 9.9920 | | 0.0303 | 3.17 | 3000 | 0.2611 | 10.2742 | | 0.0161 | 4.11 | 3500 | 0.2885 | 10.4722 | | 0.0513 | 5.04 | 4000 | 0.2994 | 9.7808 | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu117 - Datasets 2.9.1.dev0 - Tokenizers 0.13.2
SayhoKim/sd-class-butterflies-32
SayhoKim
2023-02-17T01:46:52Z
2
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-02-16T02:45:39Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('SayhoKim/sd-class-butterflies-32') image = pipeline().images[0] image ```
OpenAssistant/reward-model-deberta-v3-large
OpenAssistant
2023-02-17T01:36:23Z
979
21
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "reward-model", "reward_model", "RLHF", "en", "dataset:openai/summarize_from_feedback", "dataset:openai/webgpt_comparisons", "dataset:Dahoas/instruct-synthetic-prompt-responses", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-01-15T11:23:08Z
---
license: mit
datasets:
- openai/summarize_from_feedback
- openai/webgpt_comparisons
- Dahoas/instruct-synthetic-prompt-responses
language:
- en
metrics:
- accuracy
tags:
- reward-model
- reward_model
- RLHF
---

# Reward model trained from human feedback

A reward model (RM) trained to predict which of two generated answers to a question a human would judge as better.

RMs are useful in these domains:

- QA model evaluation
- serving as the reward score in RLHF

All models are trained on the following datasets with the same split seed across datasets (if a validation split wasn't available):

- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)

# How to use

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "OpenAssistant/reward-model-deberta-v3-large"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```

# Performance

Validation split accuracy

| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SyntheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) |
|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | 99.94 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 |

It is likely that SyntheticGPT has some kind of surface pattern in its chosen-rejected pairs that makes it trivial to identify the better answer.
wavymulder/lomo-diffusion
wavymulder
2023-02-17T01:21:58Z
63
25
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "safetensors", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-04T19:41:30Z
---
language:
- en
thumbnail: "https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg"
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- safetensors
- diffusers
inference: true
---

**Lomo Diffusion**

![Header](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page1.jpg)

[*CKPT DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.ckpt) - - - [*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/lomo-1.0.safetensors)

This is a dreambooth model trained on a diverse set of stylized photographs. Use the activation token **lomo style** in your prompt (I recommend at the start).

This model is inspired by the Lomography movement, which embraces the imperfections and style of old LOMO cameras. The model excels at producing bright saturated colors as well as a variety of film artifacts that add to the illusion of a real photograph.

When using most models, I typically use **blur haze** in my negative prompt. I encourage you to experiment and see what works well for you.

Trained from 1.5 with VAE.

Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/paramets_for_samples.txt)

You can [see a non-cherrypicked batch of 49 images here.](https://i.imgur.com/cfIj3iq.jpg)

And you can [see a direct comparison between Analog Style and Lomo Style here.](https://i.imgur.com/ugdFzPI.jpg)

![Environments Example](https://huggingface.co/wavymulder/lomo-diffusion/resolve/main/images/page2.jpg)
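A minimal `diffusers` sketch using the activation token (the fp16/CUDA settings and prompt are illustrative assumptions; see the parameters document above for the author's actual settings):

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: lead the prompt with the "lomo style" activation token.
pipe = StableDiffusionPipeline.from_pretrained("wavymulder/lomo-diffusion", torch_dtype=torch.float16).to("cuda")
image = pipe("lomo style photograph of a woman at the beach, saturated colors").images[0]
image.save("lomo.png")
```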
g8a9/roberta-tiny-8l-10M
g8a9
2023-02-17T01:15:57Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-16T22:06:15Z
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-tiny-8l-10M results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-tiny-8l-10M This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 7.3389 - Accuracy: 0.0516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 7.8102 | 1.04 | 50 | 7.3747 | 0.0514 | | 7.805 | 2.08 | 100 | 7.3699 | 0.0517 | | 7.7907 | 3.12 | 150 | 7.3595 | 0.0517 | | 7.7838 | 4.16 | 200 | 7.3617 | 0.0514 | | 7.7706 | 5.21 | 250 | 7.3586 | 0.0514 | | 7.2933 | 6.25 | 300 | 7.3566 | 0.0513 | | 7.2932 | 7.29 | 350 | 7.3527 | 0.0516 | | 7.2986 | 8.33 | 400 | 7.3561 | 0.0516 | | 7.289 | 9.37 | 450 | 7.3495 | 0.0515 | | 7.2879 | 10.41 | 500 | 7.3455 | 0.0514 | | 7.276 | 11.45 | 550 | 7.3477 | 0.0513 | | 7.3072 | 12.49 | 600 | 7.3446 | 0.0516 | | 7.2978 | 13.53 | 650 | 7.3463 | 0.0514 | | 7.2857 | 14.58 | 700 | 7.3426 | 0.0515 | | 7.2868 | 15.62 | 750 | 7.3438 | 0.0515 | | 7.2973 | 16.66 | 800 | 7.3442 | 0.0517 | | 7.2988 | 17.7 | 850 | 7.3437 | 0.0512 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0+cu113 - Datasets 2.6.1 - Tokenizers 0.12.1
Seiriryu/stable-diffusion-v1-4
Seiriryu
2023-02-17T01:15:50Z
2
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "arxiv:2207.12598", "arxiv:2112.10752", "arxiv:2103.00020", "arxiv:2205.11487", "arxiv:1910.09700", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-02-16T07:45:55Z
--- license: creativeml-openrail-m tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image widget: - text: "A high tech solarpunk utopia in the Amazon rainforest" example_title: Amazon rainforest - text: "A pikachu fine dining with a view to the Eiffel Tower" example_title: Pikachu in Paris - text: "A mecha robot in a favela in expressionist style" example_title: Expressionist robot - text: "an insect robot preparing a delicious meal" example_title: Insect robot - text: "A small cabin on top of a snowy mountain in the style of Disney, artstation" example_title: Snowy disney cabin extra_gated_prompt: |- This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license extra_gated_heading: Please read the LICENSE to access this model --- # Stable Diffusion v1-4 Model Card Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input. For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion). The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2) checkpoint and subsequently fine-tuned on 225k steps at resolution 512x512 on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). These weights are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [see here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. 
It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487). - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752). - **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ## Examples We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion. ### PyTorch ```bash pip install --upgrade diffusers transformers scipy ``` Running the pipeline with the default PNDM scheduler: ```python import torch from diffusers import StableDiffusionPipeline model_id = "CompVis/stable-diffusion-v1-4" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` **Note**: If you are limited by GPU memory and have less than 4GB of GPU RAM available, make sure to load the StableDiffusionPipeline in float16 precision rather than the default float32 (the example above already passes `torch_dtype=torch.float16`), and additionally enable attention slicing to further reduce memory use: ```py import torch pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to(device) pipe.enable_attention_slicing() prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` To swap out the noise scheduler, pass it to `from_pretrained`: ```python from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler model_id = "CompVis/stable-diffusion-v1-4" # Use the Euler scheduler here instead scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "a photo of an astronaut riding a horse on mars" image = pipe(prompt).images[0] image.save("astronaut_rides_horse.png") ``` ### JAX/Flax To use StableDiffusion on TPUs and GPUs for faster inference, you can leverage JAX/Flax. 
Running the pipeline with the default PNDMScheduler: ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, num_samples) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` **Note**: If you are limited by TPU memory, make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision rather than the default `float32`. You can do so by telling diffusers to load the weights from the "bf16" branch, which stores the weights in `bfloat16` directly: ```python import jax import numpy as np from flax.jax_utils import replicate from flax.training.common_utils import shard from diffusers import FlaxStableDiffusionPipeline pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16 ) prompt = "a photo of an astronaut riding a horse on mars" prng_seed = jax.random.PRNGKey(0) num_inference_steps = 50 num_samples = jax.device_count() prompt = num_samples * [prompt] prompt_ids = pipeline.prepare_inputs(prompt) # shard inputs and rng params = replicate(params) prng_seed = jax.random.split(prng_seed, num_samples) prompt_ids = shard(prompt_ids) images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) ``` # Uses ## Direct Use The model is intended for research purposes only. Possible research areas and tasks include - Safe deployment of models which have the potential to generate harmful content. - Probing and understanding the limitations and biases of generative models. - Generation of artworks and use in design and other artistic processes. - Applications in educational or creative tools. - Research on generative models. Excluded uses are described below. ### Misuse, Malicious Use, and Out-of-Scope Use _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_. The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes. #### Out-of-Scope Use The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. #### Misuse and Malicious Use Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to: - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc. 
- Intentionally promoting or propagating discriminatory content or harmful stereotypes. - Impersonating individuals without their consent. - Sexual content without consent of the people who might see it. - Mis- and disinformation - Representations of egregious violence and gore - Sharing of copyrighted or licensed material in violation of its terms of use. - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use. ## Limitations and Bias ### Limitations - The model does not achieve perfect photorealism - The model cannot render legible text - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere” - Faces and people in general may not be generated properly. - The model was trained mainly with English captions and will not work as well in other languages. - The autoencoding part of the model is lossy - The model was trained on a large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material and is not fit for product use without additional safety mechanisms and considerations. - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data. The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images. ### Bias While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), which consists of images that are primarily limited to English descriptions. Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. This affects the overall output of the model, as white and western cultures are often set as the default. Further, the ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts. ### Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. ## Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training, - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 - Text prompts are encoded through a ViT-L/14 text-encoder. 
- The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention. - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet. We currently provide four checkpoints, which were trained as follows. - [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`). - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)). - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598). - **Hardware:** 32 x 8 x A100 GPUs - **Optimizer:** AdamW - **Gradient Accumulations**: 2 - **Batch:** 32 x 8 x 2 x 4 = 2048 - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant ## Evaluation Results Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints: ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg) Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, at 512x512 resolution. Not optimized for FID scores. ## Environmental Impact **Stable Diffusion v1** **Estimated Emissions** Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact. - **Hardware Type:** A100 PCIe 40GB - **Hours used:** 150000 - **Cloud Provider:** AWS - **Compute Region:** US-east - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq. 
## Citation ```bibtex @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } ``` *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
EnD-Diffusers/Slime_Tutorial
EnD-Diffusers
2023-02-17T01:11:02Z
1
1
diffusers
[ "diffusers", "tensorboard", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-31T07:43:49Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: deetz1 language: - en --- [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/Duskfallcrew/Duskfalls_Slime_Tutorial) ### Duskfall's Slime Tutorial Dreambooth model trained by Duskfallcrew with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` using the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Information on this model will be here: https://civitai.com/models/5985/duskfalls-slime-tutorial If you want to donate towards costs and don't want to subscribe: https://ko-fi.com/DUSKFALLcrew If you want to support the EARTH & DUSK media projects (and not just AI) monthly: https://www.patreon.com/earthndusk DO NOT SELL THIS MODEL OR ITS MERGES. Do merge, and do enjoy. Generative images for commercial use are fine. Credit in your merges would be great.
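A minimal `diffusers` sketch (assuming diffusers-format weights in this repo; the prompt uses the `deetz1` concept token from the widget above, and the rest of the prompt is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "EnD-Diffusers/Slime_Tutorial", torch_dtype=torch.float16
).to("cuda")

# "deetz1" is the trained concept token; surrounding text is illustrative.
image = pipe("deetz1, cute slime character, studio lighting").images[0]
image.save("slime_example.png")
```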
Alex48/ppo-LunarLander-v2
Alex48
2023-02-17T01:10:55Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T23:21:52Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 286.56 +/- 21.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="Alex48/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
huggingtweets/hidden1337
huggingtweets
2023-02-17T00:53:12Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-02-17T00:52:18Z
--- language: en thumbnail: http://www.huggingtweets.com/hidden1337/1676595186873/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1604811921747853315/ZH5mghDX_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Clouke</div> <div style="text-align: center; font-size: 14px;">@hidden1337</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Clouke. | Data | Clouke | | --- | --- | | Tweets downloaded | 637 | | Retweets | 124 | | Short tweets | 210 | | Tweets kept | 303 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/x9nqovxf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hidden1337's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/5akipke4) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/5akipke4/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hidden1337') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jonathang/Protein_Family_Models
jonathang
2023-02-17T00:44:58Z
0
0
null
[ "protein", "nlp", "cnn", "lstm", "region:us" ]
null
2023-01-27T00:34:21Z
--- tags: - protein - nlp - cnn - lstm --- Model store for https://huggingface.co/spaces/jonathang/Protein-Family-CNN. Read more here: https://github.com/MLE10-Protein/Research/
BobMcDear/convmixer20_1536d_patch7_kernel9
BobMcDear
2023-02-17T00:14:47Z
0
0
null
[ "region:us" ]
null
2023-02-17T00:06:07Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
stinoco/cartpole-reinforce
stinoco
2023-02-17T00:08:34Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-02-17T00:08:22Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartpole-reinforce results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 445.40 +/- 52.01 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
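Reinforce agents from the course are custom PyTorch policies, so there is no standard loader; a minimal download sketch (the `model.pt` filename is an assumption based on the Unit 4 notebook, and unpickling requires the notebook's `Policy` class to be importable):

```python
import torch
from huggingface_hub import hf_hub_download

# "model.pt" is an assumed filename; check the repo's file list.
path = hf_hub_download(repo_id="stinoco/cartpole-reinforce", filename="model.pt")
policy = torch.load(path)  # needs the original Policy class definition in scope to unpickle
```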
BobMcDear/convmixer20_1024d_patch14_kernel9
BobMcDear
2023-02-17T00:07:19Z
0
0
null
[ "region:us" ]
null
2023-02-17T00:06:08Z
Please refer to [flaim](https://github.com/bobmcdear/flaim) for sample usage and more information.
amjadfqs/vit-base-patch16-224-in21k-finetuned-brain-tumor
amjadfqs
2023-02-16T22:40:11Z
20
1
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-02-16T16:46:09Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-base-patch16-224-in21k-finetuned-brain-tumor results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: dataset split: test args: dataset metrics: - name: Accuracy type: accuracy value: 0.9316455696202531 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-in21k-finetuned-brain-tumor This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.2753 - Accuracy: 0.9316 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 40 - eval_batch_size: 40 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2735 | 1.0 | 44 | 0.3369 | 0.9092 | | 0.2229 | 2.0 | 88 | 0.3022 | 0.9199 | | 0.2078 | 3.0 | 132 | 0.2753 | 0.9316 | | 0.1734 | 4.0 | 176 | 0.2753 | 0.9316 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
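A minimal inference sketch using the `image-classification` pipeline (the file name is illustrative; the label set comes from the fine-tuning imagefolder classes):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="amjadfqs/vit-base-patch16-224-in21k-finetuned-brain-tumor",
)
# Pass a path, URL, or PIL image of an MRI slice.
print(classifier("example_scan.png"))
```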
nossal/rl-q-taxi-v3
nossal
2023-02-16T22:26:53Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T22:26:50Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: rl-q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="nossal/rl-q-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
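The `load_from_hub` helper above comes from the course notebook rather than a published package; a minimal equivalent (a sketch, assuming the artifact is a pickled dictionary containing the Q-table and `env_id`) looks like this:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download the pickled model dictionary from the Hub and load it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```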
nossal/q-FrozenLake-v1-4x4-noSlippery
nossal
2023-02-16T22:18:55Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T22:18:52Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="nossal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
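`load_from_hub` above is defined in the course notebook; a minimal equivalent plus environment creation for this no-slippery variant (a sketch, assuming the pickle holds a dict with the Q-table and `env_id`):

```python
import pickle
import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    # Download and unpickle the saved model dictionary (Q-table, env_id, etc.).
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="nossal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # this variant was trained without slipping
```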
jcramirezpr/ppo-LunarLander-v2
jcramirezpr
2023-02-16T22:08:13Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T22:07:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 264.20 +/- 21.88 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="jcramirezpr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
sergey-antonov/a2c-PandaReachDense-v2
sergey-antonov
2023-02-16T21:34:18Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-02-16T21:28:27Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.58 +/- 0.47 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of an **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename is an assumption; check the repo's file list, and note that running the environment itself additionally requires `panda-gym`):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# The filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub(repo_id="sergey-antonov/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
Elytum/tiny-classification-fast-5
Elytum
2023-02-16T21:27:51Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-16T17:13:40Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: tiny-classification-fast-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tiny-classification-fast-5 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2045 - Accuracy: 0.9519 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.1943 | 1.0 | 8793 | 0.2026 | 0.9421 | | 0.1241 | 2.0 | 17586 | 0.2045 | 0.9519 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
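A minimal inference sketch using the `text-classification` pipeline (the input sentence is illustrative; label names depend on the unspecified fine-tuning dataset):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Elytum/tiny-classification-fast-5")
print(classifier("Example input text to classify."))
```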
fgmckee/poca-SoccerTwos100m
fgmckee
2023-02-16T21:25:06Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-02-16T21:24:58Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub; see the documentation linked above. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: fgmckee/poca-SoccerTwos100m 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀