Dataset schema (column, dtype, observed min/max):

| column | dtype | min | max |
|:--|:--|:--|:--|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-08-29 18:27:06 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (526 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-08-29 18:26:56 |
| card | string (length) | 11 | 1.01M |
jeveuxaider/missions-report-camembert
jeveuxaider
2023-03-14T21:26:39Z
7
0
transformers
[ "transformers", "tf", "tensorboard", "camembert", "text-classification", "generated_from_keras_callback", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-02-10T09:30:06Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: huynhdoo/camembert-base-finetuned-jva-missions-report results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # huynhdoo/camembert-base-finetuned-jva-missions-report This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.1540 - Train Accuracy: 0.9462 - Validation Loss: 0.4751 - Validation Accuracy: 0.8255 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 2838, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.1540 | 0.9468 | 0.4751 | 0.8255 | 0 | | 0.1540 | 0.9462 | 0.4751 | 0.8255 | 1 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
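The optimizer dict in this card embeds a Keras `PolynomialDecay` schedule (`initial_learning_rate` 5e-05, `decay_steps` 2838, `end_learning_rate` 0.0, `power` 1.0). A minimal sketch of what that schedule computes, assuming the standard Keras formula (the function name is illustrative):

```python
def polynomial_decay_lr(step, initial_lr=5e-05, decay_steps=2838,
                        end_lr=0.0, power=1.0):
    """Learning rate at `step` under a non-cycling PolynomialDecay schedule."""
    step = min(step, decay_steps)  # after decay_steps the rate stays at end_lr
    remaining = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * remaining ** power + end_lr

print(polynomial_decay_lr(0))     # starts at the full initial rate
print(polynomial_decay_lr(2838))  # decayed to end_lr by the final step
```

With `power=1.0` this reduces to plain linear decay over the 2838 training steps.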
facebook/fasttext-english-nearest-neighbors
facebook
2023-03-14T21:15:13Z
3
0
fasttext
[ "fasttext", "text-classification", "en", "license:mit", "region:us" ]
text-classification
2023-03-06T12:45:09Z
--- license: mit tags: - text-classification language: - en library_name: fasttext pipeline_tag: text-classification widget: - text: apple example_title: apple - text: cat example_title: cat - text: sunny example_title: sunny - text: water example_title: water ---
krplt/VanyaKeshevSD1.5
krplt
2023-03-14T20:57:38Z
0
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "art", " stable-diffusion", "dreambooth", "text-to-image", "en", "license:cc-by-4.0", "region:us" ]
text-to-image
2023-03-12T15:43:58Z
--- license: cc-by-4.0 pipeline_tag: text-to-image tags: - art - ' stable-diffusion' - dreambooth library_name: diffusers language: - en --- <h1>This is a diffusion model for generating SD 1.5 images, trained on 11 pictures of my friend Vanya using Dreambooth.</h1> <h3><i>Fine-tuned for ~14k steps using an NVIDIA TESLA V100. Good similarity has been achieved despite a small dataset of only 11 images at 512 by 512 pixels.</i></h3> Usage of the token below: | Token | Description | |-----------------------|--------------------------------------| | 👤 `VanyaKeshev` | Uses concept trained on Vanya | ## Examples <h1>Original face sample from dataset</h1> <img src="https://huggingface.co/MarkK/VanyaKeshevSD1.5/resolve/main/%D0%92%D0%B0%D0%BD%D0%B8%20%D0%9A%D1%83%D1%88%D0%B5%D0%B2%D1%8B/photo_2023-03-13_19-06-02.jpg" width="300"> <h1>Result</h1> <img src="https://huggingface.co/MarkK/VanyaKeshevSD1.5/resolve/main/%D0%92%D0%B0%D0%BD%D0%B8%20%D0%9A%D1%83%D1%88%D0%B5%D0%B2%D1%8B/00037-868393631.png" width="250"> <img src="https://huggingface.co/MarkK/VanyaKeshevSD1.5/resolve/main/%D0%92%D0%B0%D0%BD%D0%B8%20%D0%9A%D1%83%D1%88%D0%B5%D0%B2%D1%8B/00026-1460256817.png" width="250">
baran-cengiz/sd-class-butterflies-32
baran-cengiz
2023-03-14T20:45:08Z
30
0
diffusers
[ "diffusers", "pytorch", "unconditional-image-generation", "diffusion-models-class", "license:mit", "diffusers:DDPMPipeline", "region:us" ]
unconditional-image-generation
2023-03-14T20:44:30Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation - diffusion-models-class --- # Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class) This model is a diffusion model for unconditional image generation of cute 🦋. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('baran-cengiz/sd-class-butterflies-32') image = pipeline().images[0] image ```
austinphamm/netflix_rating_classifier
austinphamm
2023-03-14T20:21:23Z
107
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-10T01:06:56Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: netflix_rating_classifier results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # netflix_rating_classifier This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7214 - Accuracy: 0.4921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 317 | 1.2448 | 0.4692 | | 1.2581 | 2.0 | 634 | 1.1866 | 0.4976 | | 1.2581 | 3.0 | 951 | 1.2496 | 0.4968 | | 0.9032 | 4.0 | 1268 | 1.3886 | 0.5024 | | 0.511 | 5.0 | 1585 | 1.6567 | 0.4842 | | 0.511 | 6.0 | 1902 | 1.9508 | 0.4858 | | 0.2425 | 7.0 | 2219 | 2.2587 | 0.4921 | | 0.1197 | 8.0 | 2536 | 2.5835 | 0.4819 | | 0.1197 | 9.0 | 2853 | 2.9177 | 0.4921 | | 0.0571 | 10.0 | 3170 | 3.2303 | 0.4803 | | 0.0571 | 11.0 | 3487 | 3.3902 | 0.4787 | | 0.0245 | 12.0 | 3804 | 3.5701 | 0.4826 | | 0.0124 | 13.0 | 4121 | 3.6457 | 0.4756 | | 0.0124 | 14.0 | 4438 | 3.6836 | 0.4937 | | 0.0112 | 15.0 | 4755 | 3.7015 | 0.4897 | | 0.0073 | 16.0 | 5072 | 3.7214 | 0.4921 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
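The validation loss in the table above bottoms out at epoch 2 (1.1866) and climbs steadily afterwards while the training loss collapses, a classic overfitting pattern, yet the card reports the final epoch-16 checkpoint. A sketch of picking the early-stopping point from the logged history (values transcribed from the first rows of the table):

```python
# (epoch, validation_loss, accuracy) triples transcribed from the table above
history = [
    (1, 1.2448, 0.4692), (2, 1.1866, 0.4976), (3, 1.2496, 0.4968),
    (4, 1.3886, 0.5024), (5, 1.6567, 0.4842), (6, 1.9508, 0.4858),
    (7, 2.2587, 0.4921), (8, 2.5835, 0.4819),
]

# Keep the checkpoint with the lowest validation loss rather than the last one.
best_epoch, best_loss, best_acc = min(history, key=lambda row: row[1])
print(best_epoch, best_loss)  # epoch 2 is the natural early-stopping point
```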
swang2000/distilroberta-base-finetuned-wikitext2
swang2000
2023-03-14T20:09:21Z
194
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-03-14T19:23:29Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0852 | 1.0 | 2406 | 1.9225 | | 1.993 | 2.0 | 4812 | 1.8837 | | 1.9616 | 3.0 | 7218 | 1.8234 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
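Since the reported evaluation loss of a masked-language model is a mean cross-entropy in nats, the corresponding perplexity is just its exponential. A small sketch using the card's final loss:

```python
import math

def perplexity(cross_entropy_loss):
    """Perplexity implied by a mean cross-entropy loss (natural log base)."""
    return math.exp(cross_entropy_loss)

print(perplexity(1.8359))  # roughly 6.27 for the final evaluation loss above
```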
KarosY/lianjia_3l2l_668per200_1e-3
KarosY
2023-03-14T20:05:34Z
3
0
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "lora", "base_model:stabilityai/stable-diffusion-2-1-base", "base_model:adapter:stabilityai/stable-diffusion-2-1-base", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-03-14T08:51:25Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1-base tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - https://huggingface.co/KarosY/lianjia_3l2l_668per200_1e-3 These are LoRA adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were fine-tuned on the None dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
LarryAIDraw/salutemix_v1
LarryAIDraw
2023-03-14T19:48:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-14T19:22:12Z
--- license: creativeml-openrail-m ---
OMARS200/PPO-LunarLander-v2
OMARS200
2023-03-14T19:46:40Z
2
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T19:44:56Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 269.53 +/- 20.90 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
LarryAIDraw/KirasakaSayakaStrike_v10
LarryAIDraw
2023-03-14T19:33:01Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-14T19:29:42Z
--- license: creativeml-openrail-m ---
LarryAIDraw/swordArtOnlineSinon_snV1
LarryAIDraw
2023-03-14T19:32:22Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-14T19:28:43Z
--- license: creativeml-openrail-m ---
sgoodfriend/ppo-procgen-starpilot-hard-2xIMPALA
sgoodfriend
2023-03-14T19:16:50Z
0
0
rl-algo-impls
[ "rl-algo-impls", "starpilot", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T19:16:47Z
--- library_name: rl-algo-impls tags: - starpilot - ppo - deep-reinforcement-learning - reinforcement-learning model-index: - name: ppo results: - metrics: - type: mean_reward value: 33.72 +/- 13.7 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: starpilot type: starpilot --- # **PPO** Agent playing **starpilot** This is a trained model of a **PPO** agent playing **starpilot** using the [/sgoodfriend/rl-algo-impls](https://github.com/sgoodfriend/rl-algo-impls) repo. All models trained at this commit can be found at https://api.wandb.ai/links/sgoodfriend/v1p4976e. ## Training Results This model was trained from 3 trainings of **PPO** agents using different initial seeds. These agents were trained by checking out [227aa2f](https://github.com/sgoodfriend/rl-algo-impls/tree/227aa2fbde36e688a09d8ad309b0947721eef160). The best and last models were kept from each training. This submission has loaded the best models from each training, reevaluates them, and selects the best model from these latest evaluations (mean - std). | algo | env | seed | reward_mean | reward_std | eval_episodes | best | wandb_url | |:-------|:----------|-------:|--------------:|-------------:|----------------:|:-------|:-----------------------------------------------------------------------------| | ppo | starpilot | 1 | 34.2461 | 14.551 | 256 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/ts4pdvx2) | | ppo | starpilot | 2 | 32.8086 | 14.4265 | 256 | | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/ihmwg4gz) | | ppo | starpilot | 3 | 33.7227 | 13.6975 | 256 | * | [wandb](https://wandb.ai/sgoodfriend/rl-algo-impls-benchmarks/runs/rnhma1ou) | ### Prerequisites: Weights & Biases (WandB) Training and benchmarking assumes you have a Weights & Biases project to upload runs to. By default training goes to a rl-algo-impls project while benchmarks go to rl-algo-impls-benchmarks. 
During training and benchmarking runs, videos of the best models and the model weights are uploaded to WandB. Before doing anything below, you'll need to create a wandb account and run `wandb login`. ## Usage /sgoodfriend/rl-algo-impls: https://github.com/sgoodfriend/rl-algo-impls Note: While the model state dictionary and hyperparameters are saved, the latest implementation could be sufficiently different to not be able to reproduce similar results. You might need to check out the commit the agent was trained on: [227aa2f](https://github.com/sgoodfriend/rl-algo-impls/tree/227aa2fbde36e688a09d8ad309b0947721eef160). ``` # Downloads the model, sets hyperparameters, and runs agent for 3 episodes python enjoy.py --wandb-run-path=sgoodfriend/rl-algo-impls-benchmarks/rnhma1ou ``` Setup hasn't been completely worked out yet, so you might be best served by using Google Colab starting from the [colab_enjoy.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_enjoy.ipynb) notebook. ## Training If you want the highest chance to reproduce these results, you'll want to check out the commit the agent was trained on: [227aa2f](https://github.com/sgoodfriend/rl-algo-impls/tree/227aa2fbde36e688a09d8ad309b0947721eef160). While training is deterministic, different hardware will give different results. ``` python train.py --algo ppo --env starpilot --seed 3 ``` Setup hasn't been completely worked out yet, so you might be best served by using Google Colab starting from the [colab_train.ipynb](https://github.com/sgoodfriend/rl-algo-impls/blob/main/colab_train.ipynb) notebook. ## Benchmarking (with Lambda Labs instance) This and other models from https://api.wandb.ai/links/sgoodfriend/v1p4976e were generated by running a script on a Lambda Labs instance.
In a Lambda Labs instance terminal: ``` git clone git@github.com:sgoodfriend/rl-algo-impls.git cd rl-algo-impls bash ./lambda_labs/setup.sh wandb login bash ./lambda_labs/benchmark.sh ``` ### Alternative: Google Colab Pro+ As an alternative, [colab_benchmark.ipynb](https://github.com/sgoodfriend/rl-algo-impls/tree/main/benchmarks#:~:text=colab_benchmark.ipynb), can be used. However, this requires a Google Colab Pro+ subscription and running across 4 separate instances because otherwise running all jobs will exceed the 24-hour limit. ## Hyperparameters This isn't exactly the format of hyperparams in hyperparams/ppo.yml, but instead the Wandb Run Config. However, it's very close and has some additional data: ``` algo: ppo algo_hyperparams: batch_size: 8192 clip_range: 0.2 clip_range_decay: linear clip_range_vf: 0.2 ent_coef: 0.01 gae_lambda: 0.95 gamma: 0.999 learning_rate: 0.00033 learning_rate_decay: linear n_epochs: 3 n_steps: 256 vf_coef: 0.5 env: procgen-starpilot-hard-2xIMPALA env_hyperparams: is_procgen: true make_kwargs: distribution_mode: hard num_threads: 8 n_envs: 256 normalize: true env_id: starpilot eval_params: ignore_first_episode: true step_freq: 500000 n_timesteps: 200000000 policy_hyperparams: activation_fn: relu cnn_feature_dim: 256 cnn_layers_init_orthogonal: false cnn_style: impala impala_channels: - 32 - 64 - 64 init_layers_orthogonal: true seed: 3 use_deterministic_algorithms: true wandb_entity: null wandb_project_name: rl-algo-impls-benchmarks wandb_tags: - benchmark_227aa2f - host_129-146-179-31 ```
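The card states the best model is chosen from the re-evaluations by mean reward minus standard deviation; applied to the results table above, that rule indeed selects the starred seed-3 run:

```python
# (seed, reward_mean, reward_std) transcribed from the results table above
evals = [(1, 34.2461, 14.551), (2, 32.8086, 14.4265), (3, 33.7227, 13.6975)]

# Score each run pessimistically: mean minus one standard deviation.
best_seed, mean, std = max(evals, key=lambda row: row[1] - row[2])
print(best_seed)  # seed 3, matching the starred row in the table
```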
dshin/flan-t5-ppo-user-h-batch-size-64-use-violation
dshin
2023-03-14T19:06:02Z
47
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-03-14T19:05:36Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="dshin//tmp/tmpy4e0bk5v/dshin/flan-t5-ppo-user-h-batch-size-64-use-violation") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("dshin//tmp/tmpy4e0bk5v/dshin/flan-t5-ppo-user-h-batch-size-64-use-violation") model = AutoModelForCausalLMWithValueHead.from_pretrained("dshin//tmp/tmpy4e0bk5v/dshin/flan-t5-ppo-user-h-batch-size-64-use-violation") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
aj-data/AP2223-P7
aj-data
2023-03-14T19:00:12Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-03-14T19:00:07Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
BoschAI/ppo-LunarLander-v2-TEST
BoschAI
2023-03-14T18:47:25Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T18:47:02Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 256.69 +/- 18.15 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Suhas-G/PPO-LunarLander-v2
Suhas-G
2023-03-14T18:38:30Z
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T18:37:25Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 262.63 +/- 21.53 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dshin/flan-t5-ppo-user-f-batch-size-64
dshin
2023-03-14T18:38:15Z
46
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "trl", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
reinforcement-learning
2023-03-14T18:37:48Z
--- license: apache-2.0 tags: - trl - transformers - reinforcement-learning --- # TRL Model This is a [TRL language model](https://github.com/lvwerra/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="dshin//tmp/tmpk3oegfki/dshin/flan-t5-ppo-user-f-batch-size-64") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("dshin//tmp/tmpk3oegfki/dshin/flan-t5-ppo-user-f-batch-size-64") model = AutoModelForCausalLMWithValueHead.from_pretrained("dshin//tmp/tmpk3oegfki/dshin/flan-t5-ppo-user-f-batch-size-64") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
yumingyi/q-taxi-v3
yumingyi
2023-03-14T18:33:15Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T16:57:49Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="yumingyi/q-taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
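The pickled `q-learning.pkl` artifact above holds a Q-table, and at play time a Q-Learning agent simply acts greedily with respect to it. A minimal illustration with a toy table (the states, actions, and values below are made up, not the trained Taxi-v3 table):

```python
# Toy Q-table: q_table[state][action] -> estimated return (illustrative values)
q_table = {
    0: [0.1, 0.7, 0.2, 0.0],  # action 1 looks best in state 0
    1: [0.5, 0.3, 0.9, 0.4],  # action 2 looks best in state 1
}

def greedy_action(q_table, state):
    """At evaluation time the agent takes the argmax over the state's actions."""
    values = q_table[state]
    return max(range(len(values)), key=lambda a: values[a])

print(greedy_action(q_table, 0))
print(greedy_action(q_table, 1))
```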
joshnielsen876/distilbert-base-uncased-finetuned-cola
joshnielsen876
2023-03-14T18:14:14Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T18:03:09Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.5294395294021531 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5703 - Matthews Correlation: 0.5294 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5256 | 1.0 | 535 | 0.5099 | 0.4384 | | 0.3465 | 2.0 | 1070 | 0.4924 | 0.4952 | | 0.2326 | 3.0 | 1605 | 0.5703 | 0.5294 | | 0.1752 | 4.0 | 2140 | 0.7855 | 0.4936 | | 0.1271 | 5.0 | 2675 | 0.8336 | 0.5242 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
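The selection metric in this card is the Matthews correlation coefficient on CoLA's validation split. As a reference for what that number measures, a small sketch computing MCC from a binary confusion matrix (the counts below are toy values, not this model's predictions):

```python
import math

def matthews_corr(tp, tn, fp, fn):
    """MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Toy confusion matrix: 40 true positives, 30 true negatives, etc.
print(matthews_corr(tp=40, tn=30, fp=10, fn=20))
```

Unlike accuracy, MCC stays near 0 for a degenerate classifier on an imbalanced split, which is why it is the standard CoLA metric.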
ShreyasM/PyramidsRND
ShreyasM
2023-03-14T18:01:51Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-03-14T18:00:41Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: ShreyasM/PyramidsRND 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
mekjr1/guildistilbert-base-uncasedv2
mekjr1
2023-03-14T18:00:47Z
3
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-03-13T20:05:17Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: mekjr1/guildistilbert-base-uncasedv2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mekjr1/guildistilbert-base-uncasedv2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.2192 - Validation Loss: 2.1282 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7167, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.3526 | 2.1221 | 0 | | 2.2175 | 2.1288 | 1 | | 2.2160 | 2.1139 | 2 | | 2.2200 | 2.1199 | 3 | | 2.2186 | 2.1007 | 4 | | 2.2177 | 2.1503 | 5 | | 2.2185 | 2.1395 | 6 | | 2.2192 | 2.1282 | 7 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
Bennet1996/finetuning-ESG-sentiment-model-bert_new_data
Bennet1996
2023-03-14T17:40:37Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T12:48:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-ESG-sentiment-model-bert_new_data results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-ESG-sentiment-model-bert_new_data This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0785 - Accuracy: 0.99 - F1: 0.99 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cpu - Datasets 2.9.0 - Tokenizers 0.13.2
henryscheible/xlnet-base-cased_stereoset_classifieronly
henryscheible
2023-03-14T17:38:59Z
3
0
transformers
[ "transformers", "pytorch", "xlnet", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-06T04:01:37Z
--- license: mit tags: - generated_from_trainer model-index: - name: xlnet-base-cased_stereoset_classifieronly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased_stereoset_classifieronly This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
FabienDaniel/q-FrozenLake-v1-4x4-noSlippery
FabienDaniel
2023-03-14T17:38:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T17:34:43Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="FabienDaniel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
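The usage snippet above loads the agent as a pickled Q-table. As a rough sketch of the tabular Q-learning rule an agent like this is trained with — using a toy numpy Q-table with hypothetical values, not the trained weights stored in this repo:

```python
import numpy as np

# Toy 4x4 FrozenLake Q-table: 16 states x 4 actions.
# Hypothetical values for illustration; NOT the trained weights in this repo.
q_table = np.zeros((16, 4))

def greedy_action(q_table, state):
    """Exploit: pick the action with the highest Q-value for this state."""
    return int(np.argmax(q_table[state]))

def q_update(q_table, state, action, reward, next_state, alpha=0.7, gamma=0.95):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])

# Simulated transition: action 2 from state 14 reaches the goal (reward 1).
q_update(q_table, state=14, action=2, reward=1.0, next_state=15)
print(greedy_action(q_table, 14))  # prints 2: the rewarded action is now greedy
```

The real training loop wraps this update in an epsilon-greedy exploration schedule over many episodes of the FrozenLake environment.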
afaji/fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-without-freeze-LR-1e-05
afaji
2023-03-14T17:36:18Z
15
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2023-03-07T17:49:35Z
--- license: mit tags: - generated_from_trainer metrics: - f1 - precision - recall model-index: - name: fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-without-freeze-LR-1e-05 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tuned-DatasetQAS-IDK-MRC-with-indobert-large-p2-with-ITTL-without-freeze-LR-1e-05 This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4144 - Exact Match: 54.9738 - F1: 61.7773 - Precision: 63.1273 - Recall: 66.0715 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|:---------:|:-------:| | 3.4922 | 0.49 | 73 | 2.1228 | 17.8010 | 26.7821 | 24.6611 | 46.2405 | | 2.3015 | 0.99 | 146 | 1.7236 | 31.9372 | 39.3632 | 39.3151 | 50.4983 | | 1.5627 | 1.49 | 219 | 1.3562 | 43.9791 | 50.3363 | 50.6305 | 59.6967 | | 1.3703 | 1.98 | 292 | 1.3352 | 43.0628 | 50.9390 | 51.7207 | 58.2556 | | 1.0433 | 2.48 | 365 | 1.2210 | 46.7277 | 54.3203 | 55.5971 | 60.2780 | | 1.0456 | 2.97 | 438 | 1.1553 | 50.3927 | 58.4862 | 59.5577 | 65.4513 | | 0.8656 | 3.47 | 511 | 1.1815 | 50.3927 | 57.6228 | 58.5436 | 62.8284 | | 0.8838 | 3.97 | 584 | 1.2030 | 49.0838 | 56.4395 | 57.7457 | 61.5960 | | 0.6994 | 4.47 | 657 | 1.1820 | 51.9634 | 59.1479 | 59.9674 | 64.7123 | | 0.7335 | 4.96 | 730 | 1.1825 | 52.6178 | 60.0014 | 61.3988 | 64.7995 | | 0.596 | 5.46 | 803 | 1.2962 | 52.2251 | 59.6942 | 61.1135 | 63.7633 | | 0.6165 | 5.95 | 876 | 1.2169 | 53.0105 | 60.3582 | 61.5312 | 65.2088 | | 0.5917 | 6.45 | 949 | 1.3939 | 53.0105 | 60.1105 | 61.5127 | 64.4837 | | 0.5275 | 6.95 | 1022 | 1.3169 | 54.8429 | 62.1060 | 63.5898 | 66.5208 | | 0.5058 | 7.45 | 1095 | 1.3237 | 55.6283 | 62.4607 | 63.7170 | 67.3387 | | 0.4651 | 7.94 | 1168 | 1.3677 | 53.0105 | 59.7708 | 60.9283 | 64.5730 | | 0.4616 | 8.44 | 1241 | 1.4120 | 57.4607 | 63.9364 | 65.2036 | 67.4919 | | 0.4053 | 8.93 | 1314 | 1.3799 | 56.2827 | 62.8043 | 63.9601 | 66.7283 | | 0.4061 | 9.43 | 1387 | 1.4736 | 55.7592 | 62.3147 | 63.7404 | 66.0129 | | 0.4037 | 9.93 | 1460 | 1.4144 | 54.9738 | 61.7773 | 63.1273 | 66.0715 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.2.0 - Tokenizers 0.13.2
andrea-t94/roberta-fine-tuned-twitter
andrea-t94
2023-03-14T17:28:23Z
110
0
transformers
[ "transformers", "pytorch", "roberta", "fill-mask", "distillroberta-base", "twitter", "en", "dataset:andrea-t94/TwitterSentiment140", "arxiv:1801.06146", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-02-08T16:36:19Z
--- license: apache-2.0 datasets: - andrea-t94/TwitterSentiment140 language: - en metrics: - perplexity library_name: transformers tags: - distillroberta-base - twitter pipeline_tag: fill-mask --- ## Twitter-roBERTa-base fine-tuned using masked language modelling This is a RoBERTa-base model finetuned (domain adaptation) on ~2M tweets from Jin 2009 (sentiment140). This is the first step of a two-step approach to finetuning for sentiment analysis (ULMFit). This model is suitable for English. Main characteristics: - pretrained model and tokenizer: distillroberta-base - no cleaning/processing applied to the data Reference Paper: [ULMFit](https://arxiv.org/abs/1801.06146). Reference dataset: [Sentiment140](https://www.kaggle.com/datasets/kazanova/sentiment140?resource=download) Git Repo: TBD Labels: 0 -> Negative; 1 -> Positive
yujiepan/internal.wav2vec2-base-superb-ks-int8-structured79
yujiepan
2023-03-14T17:26:18Z
158
0
transformers
[ "transformers", "pytorch", "openvino", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2023-03-14T17:20:28Z
--- license: apache-2.0 tags: - audio-classification - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: w2v2-ks-jpqd-finetuned-student results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v2-ks-jpqd-finetuned-student This model is a fine-tuned version of [anton-l/wav2vec2-base-ft-keyword-spotting](https://huggingface.co/anton-l/wav2vec2-base-ft-keyword-spotting) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0641 - Accuracy: 0.9815 The model is quantized and structurally pruned (sparsity=80 in transformer block linear layers). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.4606 | 1.0 | 399 | 0.1543 | 0.9723 | | 14.8746 | 2.0 | 798 | 14.9490 | 0.9681 | | 24.7043 | 3.0 | 1197 | 24.6662 | 0.9706 | | 30.626 | 4.0 | 1596 | 30.4279 | 0.9732 | | 33.4796 | 5.0 | 1995 | 33.3182 | 0.9750 | | 34.4405 | 6.0 | 2394 | 34.2327 | 0.9744 | | 34.1743 | 7.0 | 2793 | 34.0161 | 0.9741 | | 33.47 | 8.0 | 3192 | 33.2669 | 0.9748 | | 0.2278 | 9.0 | 3591 | 0.1125 | 0.9757 | | 0.2259 | 10.0 | 3990 | 0.0848 | 0.9778 | | 0.1629 | 11.0 | 4389 | 0.0734 | 0.9788 | | 0.1658 | 12.0 | 4788 | 0.0736 | 0.9803 | | 0.2264 | 13.0 | 5187 | 0.0658 | 0.9803 | | 0.1564 | 14.0 | 5586 | 0.0677 | 0.9819 | | 0.1716 | 15.0 | 5985 | 0.0641 | 0.9815 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
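The card above reports quantization plus structured pruning at sparsity 80 in the transformer linear layers. As a hedged illustration of what structured sparsity means, here is one-shot magnitude pruning of whole output rows of a weight matrix in numpy; the model itself was pruned with a training-time (JPQD-style) procedure, so this sketches only the concept, not the method actually used:

```python
import numpy as np

def structured_prune(weight, sparsity=0.8):
    """Zero out whole output rows of a linear layer's weight matrix,
    keeping the rows with the largest L2 norms. Concept sketch only:
    the actual model was pruned during training, not one-shot like this."""
    n_rows = weight.shape[0]
    n_prune = int(round(n_rows * sparsity))
    norms = np.linalg.norm(weight, axis=1)
    prune_idx = np.argsort(norms)[:n_prune]  # smallest-norm rows go first
    pruned = weight.copy()
    pruned[prune_idx, :] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 6))  # toy linear layer with 10 output rows
w_pruned = structured_prune(w, sparsity=0.8)
zero_rows = int(np.sum(~w_pruned.any(axis=1)))
print(zero_rows)  # prints 8: 8 of 10 rows zeroed -> 80% structured sparsity
```

Zeroing whole rows (rather than scattered weights) is what makes the sparsity "structured": entire output channels can then be dropped at inference time.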
chcaa/da_dacy_large_ner_fine_grained
chcaa
2023-03-14T17:21:24Z
5
0
spacy
[ "spacy", "token-classification", "da", "dataset:chcaa/DANSK", "license:apache-2.0", "model-index", "region:us" ]
token-classification
2023-03-11T18:10:23Z
--- tags: - spacy - token-classification language: - da license: apache-2.0 model-index: - name: da_dacy_large_ner_fine_grained results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.813029316 - name: NER Recall type: recall value: 0.8336673347 - name: NER F Score type: f_score value: 0.8232189974 datasets: - chcaa/DANSK --- <a href="https://github.com/centre-for-humanities-computing/Dacy"><img src="https://centre-for-humanities-computing.github.io/DaCy/_static/icon.png" width="175" height="175" align="right" /></a> # DaCy_large_ner_fine_grained DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analyzing Danish pipelines. At the time of publishing this model, DaCy also includes the only models for fine-grained NER using the DANSK dataset, a dataset containing 18 annotation types in the same format as OntoNotes. Moreover, DaCy's largest pipeline has achieved state-of-the-art performance on named entity recognition, part-of-speech tagging, and dependency parsing for Danish on the DaNE dataset. Check out the [DaCy repository](https://github.com/centre-for-humanities-computing/DaCy) for material on how to use DaCy and reproduce the results. DaCy also contains guides on usage of the package as well as behavioural tests for biases and robustness of Danish NLP pipelines. For information about the model as well as guides to its use, please refer to [DaCy's documentation](https://centre-for-humanities-computing.github.io/DaCy/using_dacy.html). | Feature | Description | | --- | --- | | **Name** | `da_dacy_large_ner_fine_grained` | | **Version** | `0.1.0` | | **spaCy** | `>=3.5.0,<3.6.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [DANSK - Danish Annotations for NLP Specific TasKs](https://huggingface.co/datasets/chcaa/DANSK) (chcaa)<br />[chcaa/dfm-encoder-large-v1](https://huggingface.co/chcaa/dfm-encoder-large-v1) (CHCAA) | | **License** | `apache-2.0` | | **Author** | [Centre for Humanities Computing Aarhus](https://chcaa.io/#/) | ### Label Scheme <details> <summary>View label scheme (18 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FACILITY`, `GPE`, `LANGUAGE`, `LAW`, `LOCATION`, `MONEY`, `NORP`, `ORDINAL`, `ORGANIZATION`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK OF ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 82.32 | | `ENTS_P` | 81.30 | | `ENTS_R` | 83.37 | | `TRANSFORMER_LOSS` | 41138.73 | | `NER_LOSS` | 103772.53 | ### Training For progression in loss and performance on the dev set during training, please refer to the Weights and Biases run, [HERE](https://wandb.ai/emil-tj/dacy-an-efficient-pipeline-for-danish/runs/b2wv5ah9?workspace=user-emil-tj)
YisusLn/q-FrozenLake-v1-4x4-noSlippery
YisusLn
2023-03-14T17:16:20Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T17:16:16Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="YisusLn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
henryscheible/bert-large-uncased_winobias_classifieronly
henryscheible
2023-03-14T17:09:56Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-06T16:16:24Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased_winobias_classifieronly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased_winobias_classifieronly This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
ShreyasM/ppo-SnowballTarget
ShreyasM
2023-03-14T17:09:50Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-03-14T17:09:44Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget library_name: ml-agents --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Write your model_id: ShreyasM/ppo-SnowballTarget 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
nlp-waseda/comet-v2-gpt2-small-japanese
nlp-waseda
2023-03-14T16:56:19Z
134
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "ja", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-05T13:39:03Z
--- language: ja widget: - text: X が 部屋 で ゲーム するxEffect --- # COMET-GPT2 ja v2 Finetuned GPT-2 on the large version of [ATOMIC ja](https://github.com/nlp-waseda/comet-atomic-ja) using a causal language modeling (CLM) objective. The original version and the large version of ATOMIC ja were introduced in [this paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf) and in [this paper](https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B9-1.pdf), respectively. ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='nlp-waseda/comet-v2-gpt2-small-japanese') >>> set_seed(42) >>> generator('X が 副業 を 始めるxEffect', max_length=30, num_return_sequences=5, do_sample=True) [{'generated_text': 'X が 副業 を 始めるxEffect X が 収入 を 得る'}, {'generated_text': 'X が 副業 を 始めるxEffect X が 時間 を 失う'}, {'generated_text': 'X が 副業 を 始めるxEffect X が 儲かる'}, {'generated_text': 'X が 副業 を 始めるxEffect X が 稼ぐ'}, {'generated_text': 'X が 副業 を 始めるxEffect X が 稼げる ように なる'}] ``` ### Preprocessing The texts are segmented into words using Juman++ and tokenized using SentencePiece. ## Evaluation results The model achieves the following results: | BLEU | BERTScore | |:-----:|:---------:| | - | - | ### BibTeX entry and citation info ```bibtex @InProceedings{ide_nlp2023_event, author = "井手竜也 and 村田栄樹 and 堀尾海斗 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀", title = "人間と言語モデルに対するプロンプトを用いたゼロからのイベント常識知識グラフ構築", booktitle = "言語処理学会第29回年次大会", year = "2023", url = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf", note = "in Japanese" } @InProceedings{murata_nlp2023, author = "村田栄樹 and 井手竜也 and 榮田亮真 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀", title = "大規模言語モデルによって構築された常識知識グラフの拡大と低コストフィルタリング", booktitle = "言語処理学会第29回年次大会", year = "2023", url = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B9-1.pdf", note = "in Japanese" } ```
yumingyi/q-FrozenLake-v1-4x4-noSlippery
yumingyi
2023-03-14T16:55:25Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T16:55:22Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="yumingyi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
maderix/llama-65b-4bit
maderix
2023-03-14T16:49:38Z
0
69
transformers
[ "transformers", "en", "endpoints_compatible", "region:us" ]
null
2023-03-11T16:43:09Z
--- language: - en library_name: transformers --- Converted with https://github.com/qwopqwop200/GPTQ-for-LLaMa All models tested on A100-80G *Conversion may require a lot of RAM: LLaMA-7b takes ~12 GB, 13b around 21 GB, 30b around 62 GB, and 65b more than 120 GB of RAM. Installation instructions, as mentioned in the above repo: 1. Install Anaconda and create a venv with python 3.8 2. Install PyTorch (tested with torch-1.13-cu116) 3. Install the Transformers library (you'll need the latest transformers with this PR: https://github.com/huggingface/transformers/pull/21955 ) 4. Install sentencepiece from pip 5. Run python cuda_setup.py install in the venv 6. Either convert the LLaMA models yourself with the instructions from the GPTQ-for-LLaMa repo 7. or directly use these weights by downloading them individually following these instructions (https://huggingface.co/docs/huggingface_hub/guides/download) 8. Profit! 9. Best results are obtained by passing repetition_penalty (~1/0.85) and temperature=0.7 to model.generate() for most LLaMA models
henryscheible/bert-large-uncased_crows_pairs_classifieronly
henryscheible
2023-03-14T16:41:06Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-06T16:15:19Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-large-uncased_crows_pairs_classifieronly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased_crows_pairs_classifieronly This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
henryscheible/gpt2_crows_pairs_classifieronly
henryscheible
2023-03-14T16:39:55Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2023-03-06T16:15:28Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2_crows_pairs_classifieronly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_crows_pairs_classifieronly This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
henryscheible/gpt2_winobias_classifieronly
henryscheible
2023-03-14T16:31:32Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2023-03-06T03:57:10Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2_winobias_classifieronly results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_winobias_classifieronly This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
TobiTob/decision_transformer_random_230
TobiTob
2023-03-14T16:19:59Z
33
0
transformers
[ "transformers", "pytorch", "tensorboard", "decision_transformer", "generated_from_trainer", "dataset:city_learn", "endpoints_compatible", "region:us" ]
null
2023-03-14T14:57:07Z
--- tags: - generated_from_trainer datasets: - city_learn model-index: - name: decision_transformer_random_230 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # decision_transformer_random_230 This model is a fine-tuned version of [](https://huggingface.co/) on the city_learn dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
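The card above pairs a linear LR scheduler with warmup_ratio 0.1. As a minimal pure-Python sketch of the multiplier such a schedule produces (mirroring the shape of transformers' `get_linear_schedule_with_warmup`; the total step count here is hypothetical, not this run's):

```python
def linear_warmup_lr(step, total_steps, base_lr=1e-4, warmup_ratio=0.1):
    """Learning rate under a linear schedule with warmup:
    ramps 0 -> base_lr over the first warmup_ratio of training,
    then decays linearly back to 0 by the final step."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total = 1000  # hypothetical number of optimizer steps
peak = max(linear_warmup_lr(s, total) for s in range(total + 1))
print(peak)  # prints 0.0001: the peak equals base_lr, reached at the end of warmup
```

The warmup ramp keeps the first ~10% of updates small, which tends to stabilize training when Adam's moment estimates are still noisy.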
ChechkovEugene/a2c-AntBulletEnv-v0
ChechkovEugene
2023-03-14T16:18:21Z
3
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T16:11:09Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1369.78 +/- 106.81 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
lipee/q-FrozenLake-v1-4x4-noSlippery
lipee
2023-03-14T15:49:55Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T15:49:47Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="lipee/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
nouman-10/fine-tune-roberta-exist-mlm
nouman-10
2023-03-14T15:38:17Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T15:24:11Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: unsupervised-fine-tune-roberta-exist results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unsupervised-fine-tune-roberta-exist This model is a fine-tuned version of [nouman-10/unsupervised-exist-rb](https://huggingface.co/nouman-10/unsupervised-exist-rb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9911 - Accuracy: 0.7238 - F1: 0.7262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 194 | 0.5147 | 0.7471 | 0.7434 | | No log | 2.0 | 388 | 0.5395 | 0.7384 | 0.7458 | | 0.4616 | 3.0 | 582 | 0.6484 | 0.75 | 0.7440 | | 0.4616 | 4.0 | 776 | 0.9610 | 0.7355 | 0.7407 | | 0.4616 | 5.0 | 970 | 1.2414 | 0.7326 | 0.7262 | | 0.1786 | 6.0 | 1164 | 1.7050 | 0.7209 | 0.7209 | | 0.1786 | 7.0 | 1358 | 1.7930 | 0.7384 | 0.7273 | | 0.0557 | 8.0 | 1552 | 1.8999 | 0.7355 | 0.7378 | | 0.0557 | 9.0 | 1746 | 1.9886 | 0.7209 | 0.7225 | | 0.0557 | 10.0 | 1940 | 1.9911 | 0.7238 | 0.7262 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
MarkieMark1/Reinforce-PixelCopter-PLE-v0
MarkieMark1
2023-03-14T15:26:23Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T15:20:37Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 36.10 +/- 30.07 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
arrandi/a2c-AntBulletEnv-v0
arrandi
2023-03-14T15:24:48Z
4
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T15:23:51Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1296.70 +/- 120.44 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Christian90/q-FrozenLake-v1-8x8-Slippery-try2
Christian90
2023-03-14T15:20:02Z
0
0
null
[ "FrozenLake-v1-8x8", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T15:19:57Z
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-Slippery-try2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.18 +/- 0.38 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Christian90/q-FrozenLake-v1-8x8-Slippery-try2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Art-phys/a2c-AntBulletEnv-v0
Art-phys
2023-03-14T15:09:36Z
2
0
stable-baselines3
[ "stable-baselines3", "AntBulletEnv-v0", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T15:08:28Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1976.04 +/- 38.16 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch — the checkpoint filename follows the usual `<algo>-<env>.zip` convention and is an assumption; check the repo's files: ```python from stable_baselines3 import A2C from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="Art-phys/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip") model = A2C.load(checkpoint) ```
ThoDum/ppo-Pyramids
ThoDum
2023-03-14T15:07:49Z
4
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2023-03-14T15:07:44Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Find your model_id: ThoDum/ppo-Pyramids 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ViktorDo/EcoBERT-POWO_Lifecycle_Pretrained
ViktorDo
2023-03-14T15:07:23Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T11:53:52Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: EcoBERT-POWO_Lifecycle_Pretrained results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EcoBERT-POWO_Lifecycle_Pretrained This model is a fine-tuned version of [ViktorDo/EcoBERT-Pretrained](https://huggingface.co/ViktorDo/EcoBERT-Pretrained) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0782 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0895 | 1.0 | 1704 | 0.0798 | | 0.0795 | 2.0 | 3408 | 0.0769 | | 0.065 | 3.0 | 5112 | 0.0782 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
Christian90/q-FrozenLake-v1-8x8-Slippery
Christian90
2023-03-14T15:05:28Z
0
0
null
[ "FrozenLake-v1-8x8", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T15:05:22Z
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-8x8-Slippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.44 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Christian90/q-FrozenLake-v1-8x8-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Haaniya-Iram17/sd-1-5-hira
Haaniya-Iram17
2023-03-14T14:56:44Z
1
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-03-14T14:54:00Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### SD-1-5-Hira Dreambooth model trained by Haaniya-Iram17 with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)! To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars). Sample pictures of this concept:
ViktorDo/EcoBERT-POWO_Climber_Pretrained
ViktorDo
2023-03-14T14:55:07Z
107
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T11:37:06Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: EcoBERT-POWO_Climber_Pretrained results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EcoBERT-POWO_Climber_Pretrained This model is a fine-tuned version of [ViktorDo/EcoBERT-Pretrained](https://huggingface.co/ViktorDo/EcoBERT-Pretrained) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1006 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0984 | 1.0 | 2133 | 0.1009 | | 0.082 | 2.0 | 4266 | 0.0979 | | 0.0769 | 3.0 | 6399 | 0.1006 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
NTCAL/norbert2_sentiment_norec_en_gpu_500_rader_max_noder_task
NTCAL
2023-03-14T14:49:06Z
109
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T14:36:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - precision model-index: - name: norbert2_sentiment_norec_en_gpu_500_rader_max_noder_task results: [] --- # Runtime #SBATCH --nodes=2 #SBATCH --ntasks-per-node=3 #SBATCH --gres=gpu:A100m40:1 {'train_runtime': 60.0918, 'train_samples_per_second': 41.603, 'train_steps_per_second': 0.166, 'train_loss': 0.6561894416809082, 'epoch': 5.0} Time: 60.09 Samples/second: 41.60 <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # norbert2_sentiment_norec_en_gpu_500_rader_max_noder_task This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6280 - Accuracy: 0.678 - Balanced Accuracy: 0.4889 - F1 Score: 0.8076 - Recall: 0.9713 - Precision: 0.6912 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | F1 Score | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------:|:------:|:---------:| | No log | 1.0 | 2 | 0.6324 | 0.696 | 0.5 | 0.8208 | 1.0 | 0.696 | | No log | 2.0 | 4 | 0.6264 | 0.692 | 0.4971 | 0.8180 | 0.9943 | 0.6948 | | No log | 3.0 | 6 | 0.6180 | 0.696 | 0.5 | 0.8208 | 1.0 | 0.696 | | No log | 4.0 | 8 | 0.6236 | 0.694 | 0.5023 | 0.8185 | 0.9914 | 0.6970 | | 0.6562 | 5.0 | 10 | 0.6280 | 0.678 | 0.4889 | 0.8076 | 0.9713 | 0.6912 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
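The metrics in the card above — accuracy 0.678 but balanced accuracy 0.4889, with recall 0.9713 — are the signature of a classifier that labels almost everything as the majority class. A pure-Python sketch of how these metrics relate; the confusion-matrix counts are reconstructed to reproduce the reported numbers on a 500-row eval set and are an assumption, not the card's raw data:

```python
# Sketch: accuracy, balanced accuracy and F1 from a binary confusion matrix.
# The counts (tp=338, fp=151, fn=10, tn=1) are reconstructed assumptions that
# reproduce the card's reported metrics on a 500-example eval set.
def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    balanced_accuracy = (recall + specificity) / 2
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, balanced_accuracy, f1

acc, bal_acc, f1 = metrics(tp=338, fp=151, fn=10, tn=1)
print(round(acc, 3), round(bal_acc, 4), round(f1, 4))  # 0.678 0.4889 0.8076
```

Balanced accuracy averaging the per-class recalls is what exposes the near-majority predictor here: the negative class is almost never recovered, so the score hovers around 0.5 even though plain accuracy looks respectable.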
Christian90/Taxi-v3
Christian90
2023-03-14T14:39:43Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T14:39:39Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Christian90/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Christian90/q-FrozenLake-v1-4x4-noSlippery
Christian90
2023-03-14T14:36:32Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T14:36:29Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Christian90/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Kittitouch/rl_course_vizdoom_health_gathering_supreme
Kittitouch
2023-03-14T14:34:52Z
0
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-12T07:52:20Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 12.10 +/- 6.57 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r Kittitouch/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
AustinCarthy/GPT2_10M_benign_URLs
AustinCarthy
2023-03-14T14:17:48Z
115
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-14T01:33:12Z
--- license: mit tags: - generated_from_trainer model-index: - name: GPT2_10M_benign_URLs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT2_10M_benign_URLs This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.26.0 - Pytorch 1.9.0+cu111 - Datasets 2.9.0 - Tokenizers 0.13.2
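The card above, like several others in this dump, reports `total_train_batch_size: 256` next to `train_batch_size: 32`. A small sketch of how the Trainer derives the effective batch size from the per-device batch size, gradient-accumulation steps, and (for distributed runs) device count:

```python
# Effective batch size as reported in these Trainer cards:
# per-device batch size x gradient-accumulation steps x number of devices.
def total_train_batch_size(per_device, grad_accum_steps, num_devices=1):
    return per_device * grad_accum_steps * num_devices

print(total_train_batch_size(32, 8))     # 256 — this card (single device)
print(total_train_batch_size(1, 1, 4))   # 4 — a 4-GPU run with no accumulation
```

Gradient accumulation trades wall-clock time for memory: gradients from 8 micro-batches of 32 are summed before one optimizer step, so the update statistics match a true batch of 256.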
Kaspar/vit-base-railspace
Kaspar
2023-03-14T14:16:28Z
226
2
transformers
[ "transformers", "pytorch", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2023-03-13T16:52:25Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: vit-base-railspace results: [] widget: - src: https://huggingface.co/davanstrien/autotrain-mapreader-5000-40830105612/resolve/main/1.png example_title: patch - src: https://huggingface.co/davanstrien/autotrain-mapreader-5000-40830105612/resolve/main/271.png example_title: patch --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-railspace This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0292 - Accuracy: 0.9926 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data | class | precision | recall | f1-score | support | |:---:|:---:|:---:|:---:|:---:| | 0 | 1.00 | 1.00 | 1.00 | 11315 | | 1 | 0.92 | 0.94 | 0.93 | 204 | | 2 | 0.95 | 0.97 | 0.96 | 714 | | 3 | 0.87 | 0.98 | 0.92 | 171 | | macro avg | 0.93 | 0.97 | 0.95 | 12404 | | weighted avg | 0.99 | 0.99 | 0.99 | 12404 | Accuracy: 0.99 (12404 samples) ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 64 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0206 | 1.72 | 1000 | 0.0422 | 0.9854 | | 0.0008 | 3.44 | 2000 | 0.0316 | 0.9918 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
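The per-class scores in the card above can be rolled up two ways, and its report shows both. A pure-Python sketch using the (rounded) per-class f1-scores and supports from the report; note that the weighted average computed from these rounded inputs lands at ≈0.9954, which the report prints as 0.99:

```python
# Macro vs. weighted averaging over the card's per-class f1-scores.
# Inputs are the (rounded) values printed in the report above.
def macro_avg(scores):
    return sum(scores) / len(scores)           # every class counts equally

def weighted_avg(scores, supports):
    return sum(s * n for s, n in zip(scores, supports)) / sum(supports)

f1 = [1.00, 0.93, 0.96, 0.92]
support = [11315, 204, 714, 171]
print(round(macro_avg(f1), 4))                 # 0.9525 — rare classes drag it down
print(round(weighted_avg(f1, support), 4))     # ~0.9954 — dominated by class 0
```

With class 0 holding 11315 of 12404 samples, the weighted average tracks that class almost exactly; the macro average is the more honest summary of the three small classes.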
joheras/sentiment-analysis
joheras
2023-03-14T14:15:03Z
0
0
fastai
[ "fastai", "region:us" ]
null
2023-03-14T14:14:56Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
NTCAL/norbert2_sentiment_norec_en_gpu_500_rader_max_1
NTCAL
2023-03-14T14:12:25Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T14:03:16Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - precision model-index: - name: norbert2_sentiment_norec_en_gpu_500_rader_max_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Runtime {'train_runtime': 291.2967, 'train_samples_per_second': 51.494, 'train_steps_per_second': 0.189, 'train_loss': 0.6998663252050227, 'epoch': 4.94} Time: 291.30 Samples/second: 51.49 #SBATCH --nodes=1 #SBATCH --ntasks-per-node=1 #SBATCH --gres=gpu:A100m40:1 # norbert2_sentiment_norec_en_gpu_500_rader_max_1 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6269 - Accuracy: 0.682 - Balanced Accuracy: 0.5048 - F1 Score: 0.8073 - Recall: 0.9569 - Precision: 0.6981 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Balanced Accuracy | F1 Score | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:--------:|:------:|:---------:| | No log | 1.0 | 2 | 0.6311 | 0.688 | 0.4943 | 0.8152 | 0.9885 | 0.6935 | | No log | 2.0 | 4 | 0.6316 | 0.674 | 0.5268 | 0.7939 | 0.9023 | 0.7088 | | No log | 3.0 | 6 | 0.6199 | 0.686 | 0.5002 | 0.8120 | 0.9741 | 0.6961 | | No log | 4.0 | 8 | 0.6475 | 0.652 | 0.5277 | 0.7717 | 0.8448 | 0.7101 | | 0.6559 | 5.0 | 10 | 0.6269 | 0.682 | 0.5048 | 0.8073 | 0.9569 | 0.6981 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
aidiary/dqn-SpaceInvadersNoFrameskip-v4
aidiary
2023-03-14T14:09:12Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T14:08:29Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 633.50 +/- 244.70 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aidiary -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga aidiary -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga aidiary ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), 
('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
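The DQN hyperparameters above (`exploration_fraction 0.1`, `exploration_final_eps 0.01`, 1M timesteps) define a linear epsilon-greedy exploration schedule. A pure-Python sketch of it — the initial epsilon of 1.0 is SB3's default and an assumption for this particular run:

```python
# Linear exploration schedule implied by the DQN hyperparameters above:
# epsilon decays from initial_eps to final_eps over the first
# exploration_fraction of training, then stays at the floor.
# initial_eps=1.0 is SB3's default (an assumption for this run).
def epsilon(step, n_timesteps=1_000_000, final_eps=0.01, fraction=0.1, initial_eps=1.0):
    progress = min(step / (fraction * n_timesteps), 1.0)
    return initial_eps + progress * (final_eps - initial_eps)

print(epsilon(0))                  # 1.0 — fully random at the start
print(round(epsilon(50_000), 3))   # 0.505 — halfway through the decay window
print(round(epsilon(100_000), 3))  # 0.01 — floor hit at 10% of 1M steps
print(round(epsilon(900_000), 3))  # 0.01 — flat for the rest of training
```

So for the last 90% of training the agent acts greedily 99% of the time, which is why `learning_starts: 100000` and the decay window line up.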
kurohige/poca
kurohige
2023-03-14T14:08:59Z
3
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-03-14T14:08:48Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: kurohige/poca 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
intanm/xlm-roberta-clickbait-spoiling
intanm
2023-03-14T13:51:11Z
134
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
question-answering
2023-03-14T13:07:34Z
--- license: cc-by-4.0 tags: - generated_from_trainer model-index: - name: xlm-roberta-clickbait-spoiling results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-clickbait-spoiling This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 200 | 2.7484 | | No log | 2.0 | 400 | 2.7115 | | 2.656 | 3.0 | 600 | 2.8556 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
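Most Trainer cards here, including the one above, use `lr_scheduler_type: linear`. A pure-Python sketch of that schedule — warm up to the base LR, then decay linearly to zero — where the 600-step total matches this card's 3 epochs x 200 steps, and zero warmup is an assumption:

```python
# Sketch of the "linear" LR schedule used by these runs: optional warmup to
# base_lr, then linear decay to 0 at total_steps. Zero warmup is assumed.
def linear_lr(step, base_lr=2e-5, warmup_steps=0, total_steps=600):
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))     # 2e-05 at step 0 (no warmup)
print(linear_lr(300))   # 1e-05 halfway through the 600 training steps
print(linear_lr(600))   # 0.0 at the end
```

The decay to zero is why the final-epoch eval loss in these cards often plateaus: by the last steps the updates are vanishingly small.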
soBeauty/distilroberta-base-ThaiCLM-Thairath
soBeauty
2023-03-14T13:49:45Z
161
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-03-14T13:38:31Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-ThaiCLM-Thairath results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-ThaiCLM-Thairath This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5079 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 1.4461 | | No log | 2.0 | 34 | 1.4651 | | No log | 3.0 | 51 | 1.9258 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
timmartin/my_awesome_eli5_clm-model
timmartin
2023-03-14T13:48:28Z
117
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-14T13:19:37Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: my_awesome_eli5_clm-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.7127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7237 | 1.0 | 1066 | 3.7168 | | 3.6706 | 2.0 | 2132 | 3.7143 | | 3.6374 | 3.0 | 3198 | 3.7127 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.0 - Tokenizers 0.13.2
Nelsonlin0321/poca-SoccerTwos-v4
Nelsonlin0321
2023-03-14T13:46:01Z
5
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SoccerTwos", "region:us" ]
reinforcement-learning
2023-03-14T13:43:40Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: Nelsonlin0321/poca-SoccerTwos-v4 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AlekseyKorshuk/pyg-6b-edit-test
AlekseyKorshuk
2023-03-14T13:44:20Z
5
0
transformers
[ "transformers", "pytorch", "gptj", "text-generation", "generated_from_trainer", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-03-13T18:31:56Z
--- license: creativeml-openrail-m tags: - generated_from_trainer model-index: - name: pyg-6b-edit-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pyg-6b-edit-test This model is a fine-tuned version of [PygmalionAI/pygmalion-6b](https://huggingface.co/PygmalionAI/pygmalion-6b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.4302 | 1.0 | 9635 | nan | ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
gyronee/SpaceInvadersNoFrameskip-v4
gyronee
2023-03-14T13:41:40Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T13:40:48Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 731.00 +/- 272.18 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gyronee -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga gyronee -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga gyronee ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), 
('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
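The `exploration_fraction` and `exploration_final_eps` entries above describe DQN's linear ε-greedy schedule: ε is annealed from 1.0 down to 0.01 over the first 10% of the 1M training steps, then held constant. A small sketch of the schedule these values imply (the helper name is illustrative, not part of the SB3 API):

```python
def epsilon_at(step, n_timesteps=1_000_000, exploration_fraction=0.1,
               initial_eps=1.0, final_eps=0.01):
    """Linearly anneal epsilon over the first `exploration_fraction` of training."""
    anneal_steps = exploration_fraction * n_timesteps  # 100_000 steps here
    if step >= anneal_steps:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * (step / anneal_steps)

print(epsilon_at(0))        # 1.0   (fully random at the start)
print(epsilon_at(50_000))   # 0.505 (halfway through annealing)
print(epsilon_at(200_000))  # 0.01  (mostly greedy for the rest of training)
```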
soBeauty/distilgpt2-ThaiCLM-Thairath
soBeauty
2023-03-14T13:35:42Z
174
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-14T13:23:27Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-ThaiCLM-Thairath results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-ThaiCLM-Thairath This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9806 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 17 | 2.0238 | | No log | 2.0 | 34 | 1.9877 | | No log | 3.0 | 51 | 1.9806 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
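Since the validation loss reported above is average cross-entropy per token, it maps directly to perplexity via `exp(loss)`; a quick check for the final epoch:

```python
import math

val_loss = 1.9806  # final validation loss from the table above
perplexity = math.exp(val_loss)
print(f"perplexity ~ {perplexity:.2f}")  # roughly 7.25
```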
hoanglongvn/Taxi-unit2
hoanglongvn
2023-03-14T13:32:54Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T13:32:53Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-unit2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="hoanglongvn/Taxi-unit2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
hoanglongvn/q-FrozenLake-v1-4x4-noSlippery
hoanglongvn
2023-03-14T13:30:46Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T13:30:44Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="hoanglongvn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
peterdamn/Reinforce-cartpole
peterdamn
2023-03-14T13:18:45Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T13:18:37Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
NTCAL/norbert2_sentiment_norec_to_gpu_500_rader_8
NTCAL
2023-03-14T13:16:38Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T12:47:56Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - precision model-index: - name: norbert2_sentiment_norec_to_gpu_500_rader_8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Runtime: {'train_runtime': 432.2459, 'train_samples_per_second': 5.784, 'train_steps_per_second': 0.012, 'train_loss': 0.6640925884246827, 'epoch': 5.0} Time: 432.25 Samples/second: 5.78 GPU memory occupied: 11314 MB. # norbert2_sentiment_norec_to_gpu_500_rader_8 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6252 - Compute Metrics: : - Accuracy: 0.692 - Balanced Accuracy: 0.4971 - F1 Score: 0.8180 - Recall: 0.9943 - Precision: 0.6948 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Compute Metrics | Accuracy | Balanced Accuracy | F1 Score | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:---------------:|:--------:|:-----------------:|:--------:|:------:|:---------:| | No log | 1.0 | 1 | 0.6370 | : | 0.696 | 0.5 | 0.8208 | 1.0 | 0.696 | | No log | 2.0 | 2 | 0.6319 | : | 0.684 | 0.4932 | 0.8119 | 0.9799 | 0.6931 | | No log | 3.0 | 3 | 0.6415 | : | 0.692 | 0.4971 | 0.8180 | 0.9943 | 0.6948 | | No log | 4.0 | 4 | 0.6299 | : | 0.692 | 0.4971 | 0.8180 | 0.9943 | 0.6948 | | No log | 5.0 | 5 | 0.6252 | : | 0.692 | 0.4971 | 0.8180 | 0.9943 | 0.6948 | ### Framework versions - Transformers 4.26.0 - Pytorch 1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
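Balanced accuracy is the mean of per-class recall, so the metrics above can be cross-checked: with positive-class recall ≈ 0.9943 but balanced accuracy ≈ 0.4971, the implied negative-class recall is essentially zero — i.e. the model predicts the majority class almost exclusively. A sketch of that relationship:

```python
recall_pos = 0.9943        # recall on the positive class (from the table above)
balanced_accuracy = 0.4971
# balanced accuracy = (recall_pos + recall_neg) / 2, so:
recall_neg = 2 * balanced_accuracy - recall_pos
print(f"implied negative-class recall ~ {recall_neg:.4f}")  # ~ 0
```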
Felipe474/dqn-SpaceInvadersNoFrameskip-v4
Felipe474
2023-03-14T13:04:02Z
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T13:03:44Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 539.00 +/- 165.50 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felipe474 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Felipe474 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Felipe474 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), 
('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
LarryAIDraw/shinjoAkaneSSSS_v1
LarryAIDraw
2023-03-14T13:03:09Z
0
1
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-14T13:01:18Z
--- license: creativeml-openrail-m ---
LarryAIDraw/katouMegumiSaekano_v1
LarryAIDraw
2023-03-14T12:59:42Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2023-03-14T12:50:49Z
--- license: creativeml-openrail-m --- https://civitai.com/models/19371/katou-megumi-saekano
NTCAL/norbert2_sentiment_norec_en_gpu_3000_rader_2_test
NTCAL
2023-03-14T12:04:41Z
105
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T11:57:23Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - recall - precision model-index: - name: norbert2_sentiment_norec_en_gpu_3000_rader_2_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # norbert2_sentiment_norec_en_gpu_3000_rader_2_test This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6243 - Compute Metrics: : - Accuracy: 0.6887 - Balanced Accuracy: 0.5020 - F1 Score: 0.8149 - Recall: 0.9932 - Precision: 0.6909 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Compute Metrics | Accuracy | Balanced Accuracy | F1 Score | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:---------------:|:--------:|:-----------------:|:--------:|:------:|:---------:| | 0.6753 | 0.94 | 11 | 0.6527 | : | 0.669 | 0.5064 | 0.7957 | 0.9343 | 0.6929 | | 0.7261 | 1.94 | 22 | 0.6292 | : | 0.6813 | 0.5032 | 0.8080 | 0.9720 | 0.6914 | | 0.7124 | 2.94 | 33 | 0.6263 | : | 0.688 | 0.5012 | 0.8145 | 0.9928 | 0.6905 | | 0.7036 | 3.94 | 44 | 0.6271 | : | 0.686 | 0.5015 | 0.8126 | 0.9870 | 0.6907 | | 0.7035 | 4.94 | 55 | 0.6243 | : | 0.6887 | 0.5020 | 0.8149 | 0.9932 | 0.6909 | ### Framework versions - Transformers 4.26.0 - Pytorch 
1.13.1+cu117 - Datasets 2.9.0 - Tokenizers 0.13.2
MarkieMark1/Reinforce-CartPole-v1
MarkieMark1
2023-03-14T11:55:25Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T11:55:12Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
nouman-10/fine-tune-roberta-sem-exist
nouman-10
2023-03-14T11:51:44Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T11:24:46Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: fine-tune-roberta-sem-exist results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fine-tune-roberta-sem-exist This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2788 - Accuracy: 0.7413 - F1: 0.7192 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4016 | 1.0 | 1194 | 0.6726 | 0.6948 | 0.6602 | | 0.3213 | 2.0 | 2388 | 0.8774 | 0.6948 | 0.6263 | | 0.2326 | 3.0 | 3582 | 0.8233 | 0.7209 | 0.7055 | | 0.1785 | 4.0 | 4776 | 1.0899 | 0.7267 | 0.6968 | | 0.1319 | 5.0 | 5970 | 1.2788 | 0.7413 | 0.7192 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
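A fine-tuned sequence classifier like this returns raw logits; predicted labels come from a softmax followed by argmax. A minimal, model-free sketch of that final step (the logits and label names here are hypothetical — check the model's `config.json` for the real `id2label` mapping):

```python
import math

def logits_to_label(logits, id2label):
    """Numerically stable softmax + argmax, as a text-classification pipeline does."""
    exps = [math.exp(x - max(logits)) for x in logits]
    probs = [e / sum(exps) for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return id2label[best], probs[best]

# Hypothetical logits for a two-class head:
label, prob = logits_to_label([-1.2, 2.3], {0: "LABEL_0", 1: "LABEL_1"})
print(label, round(prob, 3))  # LABEL_1 0.971
```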
uisikdag/42000news_turkish_convbert_uncased_finetune
uisikdag
2023-03-14T11:51:26Z
108
0
transformers
[ "transformers", "pytorch", "tensorboard", "convbert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T09:31:35Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy model-index: - name: umit_42000news_convbert_uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # umit_42000news_convbert_uncased This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-uncased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0049 - Accuracy: 0.6654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1237 | 1.0 | 1584 | 1.1773 | 0.5974 | | 1.1288 | 2.0 | 3168 | 1.0300 | 0.6521 | | 0.6861 | 3.0 | 4752 | 1.0049 | 0.6654 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.2
theolee/ppo-LunarLander-v2
theolee
2023-03-14T11:49:58Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T09:20:03Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 269.26 +/- 19.81 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Youngdal/Reinforce-cartpole_v1
Youngdal
2023-03-14T11:45:22Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T10:18:44Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-cartpole_v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
kaliputra/ppo-SnowballTarget
kaliputra
2023-03-14T11:40:31Z
6
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
reinforcement-learning
2023-03-14T11:40:24Z
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Find your model_id: kaliputra/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Rishu115/mlm-bert-train_finalTraining_changedLR
Rishu115
2023-03-14T11:38:51Z
0
0
tf-keras
[ "tf-keras", "tf", "bert", "generated_from_keras_callback", "region:us" ]
null
2023-03-14T04:14:09Z
--- tags: - generated_from_keras_callback model-index: - name: Rishu115/mlm-bert-train_finalTraining_changedLR results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Rishu115/mlm-bert-train_finalTraining_changedLR This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9469 - Validation Loss: 0.8665 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 47396, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.1016 | 0.8681 | 0 | | 0.9469 | 0.8665 | 1 | ### Framework versions - Transformers 4.23.1 - TensorFlow 2.10.0 - Datasets 2.10.1 - Tokenizers 0.13.2
abbiekeats/Taxi-v3
abbiekeats
2023-03-14T11:32:33Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T11:32:28Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="abbiekeats/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Christian90/ppo-LunarLander-v2-try5
Christian90
2023-03-14T11:03:57Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T11:03:40Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 282.47 +/- 22.42 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ViktorDo/EcoBERT-Pretrained
ViktorDo
2023-03-14T10:58:25Z
167
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2023-03-14T09:38:15Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: EcoBERT-Pretrained results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EcoBERT-Pretrained This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3457 | 0.66 | 500 | 2.2838 | | 2.3622 | 1.32 | 1000 | 2.2896 | | 2.3474 | 1.98 | 1500 | 2.2877 | | 2.3606 | 2.64 | 2000 | 2.2821 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
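Under the hood, a fill-mask head scores every vocabulary token for the masked position and returns the top-k candidates; a model-free sketch of that ranking step (the example scores are hypothetical):

```python
def top_k(token_scores, k=5):
    """Return the k highest-scoring (token, score) pairs, as a fill-mask pipeline does."""
    return sorted(token_scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical scores for the masked slot in "The [MASK] grows in the forest."
scores = {"tree": 0.41, "plant": 0.22, "moss": 0.12, "fungus": 0.09, "fern": 0.07, "car": 0.01}
print(top_k(scores, k=3))  # [('tree', 0.41), ('plant', 0.22), ('moss', 0.12)]

# With the actual checkpoint (downloads the model weights):
# from transformers import pipeline
# fill = pipeline("fill-mask", model="ViktorDo/EcoBERT-Pretrained")
# fill("The [MASK] grows in the forest.")
```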
lipee/ppo-LunarLander-v2
lipee
2023-03-14T10:57:54Z
3
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T09:15:47Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 283.98 +/- 15.75 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Luisfrdz/a2c-PandaReachDense-v2
Luisfrdz
2023-03-14T10:56:29Z
0
0
stable-baselines3
[ "stable-baselines3", "PandaReachDense-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2023-03-13T14:44:17Z
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -9.33 +/- 0.92 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
ybelkada/fonts
ybelkada
2023-03-14T10:56:01Z
0
0
null
[ "region:us" ]
null
2023-03-14T10:07:40Z
# Fonts A utility repo to conveniently load fonts using `hf_hub_download`: ```python from huggingface_hub import hf_hub_download from PIL import ImageFont font_path = hf_hub_download("ybelkada/fonts", "Arial.TTF") font_obj = ImageFont.truetype(font_path, 16) ```
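Once downloaded, the font can be used to draw text onto an image with Pillow; a sketch of that usage (it uses Pillow's built-in default font so it runs without the Hub download — substitute the `truetype` object from above in practice):

```python
from PIL import Image, ImageDraw, ImageFont

img = Image.new("RGB", (200, 50), "white")
draw = ImageDraw.Draw(img)
font = ImageFont.load_default()  # stand-in; use ImageFont.truetype(font_path, 16) after downloading
draw.text((10, 10), "Hello fonts!", fill="black", font=font)
img.save("annotated.png")
```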
nahorh/text_summarization_48_91_rouge_knowdocument
nahorh
2023-03-14T10:49:56Z
108
0
transformers
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain", "summarization", "unk", "dataset:nahorh/autotrain-data-text_summarization_knowdocument", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2023-03-14T09:38:31Z
--- tags: - autotrain - summarization language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - nahorh/autotrain-data-text_summarization_knowdocument co2_eq_emissions: emissions: 27.263457456233834 --- # Model Trained Using AutoTrain - Problem type: Summarization - Model ID: 40969105857 - CO2 Emissions (in grams): 27.2635 ## Validation Metrics - Loss: 0.753 - Rouge1: 48.910 - Rouge2: 28.780 - RougeL: 38.796 - RougeLsum: 46.262 - Gen Len: 68.490 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/nahorh/autotrain-text_summarization_knowdocument-40969105857 ```
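The same call can be made from Python; a minimal sketch that builds the request exactly as the cURL command above does (sending it requires a valid API key and network access, so the `requests.post` line is left commented):

```python
import json

# URL as given in the model card's cURL example
API_URL = "https://api-inference.huggingface.co/nahorh/autotrain-text_summarization_knowdocument-40969105857"

def build_request(text, api_key):
    """Assemble the headers and JSON payload for the Inference API call."""
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    payload = json.dumps({"inputs": text})
    return headers, payload

headers, payload = build_request("I love AutoTrain", "YOUR_HUGGINGFACE_API_KEY")
print(payload)  # {"inputs": "I love AutoTrain"}
# import requests
# print(requests.post(API_URL, headers=headers, data=payload).json())
```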
charmquark/Reinforce-Pixelcopter-PLE-v0
charmquark
2023-03-14T10:30:35Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T10:30:32Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 42.80 +/- 34.87 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
zhengudaoer/Wenzhong-GPT2-110M-finetuned-wikitext2-2
zhengudaoer
2023-03-14T10:24:08Z
173
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-14T10:04:30Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Wenzhong-GPT2-110M-finetuned-wikitext2-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wenzhong-GPT2-110M-finetuned-wikitext2-2 This model is a fine-tuned version of [IDEA-CCNL/Wenzhong-GPT2-110M](https://huggingface.co/IDEA-CCNL/Wenzhong-GPT2-110M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8460 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 54 | 1.8208 | | No log | 2.0 | 108 | 1.8271 | | No log | 3.0 | 162 | 1.8460 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1 - Datasets 2.10.1 - Tokenizers 0.13.2
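Since the evaluation figure above is a cross-entropy loss, it can be converted to perplexity — the more common language-modelling metric — by simple exponentiation:

```python
import math

eval_loss = 1.8460  # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(f"perplexity = {perplexity:.2f}")  # → perplexity = 6.33
```

So a validation loss of 1.8460 corresponds to a perplexity of roughly 6.3 on the evaluation set.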
merve/distilbert-base-uncased-finetuned-cola
merve
2023-03-14T09:55:07Z
121
1
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2023-03-14T09:42:11Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - matthews_correlation model-index: - name: distilbert-base-uncased-finetuned-cola results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: cola split: validation args: cola metrics: - name: Matthews Correlation type: matthews_correlation value: 0.47078712112764887 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5252 - Matthews Correlation: 0.4708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.3373 | 1.0 | 535 | 0.5252 | 0.4708 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.12.1+cu102 - Datasets 2.10.1 - Tokenizers 0.13.2
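For readers unfamiliar with the Matthews correlation reported above (0.4708), here is a from-scratch computation of the metric on toy binary labels — illustrative only, not tied to this model's predictions:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """MCC from the 2x2 confusion matrix; returns 0.0 when undefined."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 1, 0, 0]))  # perfect agreement → 1.0
print(matthews_corrcoef([1, 1, 0, 0], [0, 0, 1, 1]))  # total disagreement → -1.0
```

Unlike accuracy, MCC stays informative on the class-imbalanced CoLA validation split, which is why the card reports it.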
Darsh12/custom-bert-finetuned-squad
Darsh12
2023-03-14T09:53:47Z
61
0
transformers
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2023-03-14T07:20:40Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Darsh12/custom-bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Darsh12/custom-bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5661 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2728 | 0 | | 0.7757 | 1 | | 0.5661 | 2 | ### Framework versions - Transformers 4.26.1 - TensorFlow 2.11.0 - Datasets 2.10.1 - Tokenizers 0.13.2
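Extractive QA models like this one emit per-token start and end logits, and the answer is the span maximizing their sum. A simplified version of that span-selection step (an illustration, not this model's inference code) can be sketched as:

```python
def best_span(start_logits, end_logits, max_len=15):
    """Pick (start, end) maximizing start_logits[s] + end_logits[e], s <= e."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        # Only consider end positions at or after s, within max_len tokens.
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy logits over a three-token context:
print(best_span([0.1, 2.0, 0.3], [0.2, 0.1, 1.5]))  # → (1, 2)
```

Production pipelines additionally mask out spans that fall in the question or overflow the context, but the core argmax-over-pairs logic is the same.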
alexbalandi/taxi-v3
alexbalandi
2023-03-14T09:49:10Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T09:49:05Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.74 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="alexbalandi/taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
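The usage snippet above only loads a finished Q-table. For context, training one boils down to an ε-greedy action choice plus the tabular Q-learning update; a minimal sketch (the `alpha`/`gamma` values are illustrative course-style defaults, not necessarily what this model used):

```python
import random

def epsilon_greedy(qtable, state, n_actions, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    values = [qtable.get((state, a), 0.0) for a in range(n_actions)]
    return max(range(n_actions), key=values.__getitem__)

def q_update(qtable, state, action, reward, next_state, n_actions,
             alpha=0.7, gamma=0.95):
    """One tabular Q-learning step: Q <- Q + alpha * (TD target - Q)."""
    best_next = max(qtable.get((next_state, a), 0.0) for a in range(n_actions))
    old = qtable.get((state, action), 0.0)
    qtable[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
q_update(q, 0, 1, 10.0, 1, n_actions=6)  # Taxi-v3 has 6 actions
print(q[(0, 1)])  # 0 + 0.7 * (10 + 0.95*0 - 0) = 7.0
```

Repeating these two steps over many episodes, while decaying ε, produces a Q-table like the pickled one in this repository.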
Humayoun/Donut4
Humayoun
2023-03-14T09:44:46Z
47
1
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2023-03-14T08:29:38Z
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: Donut4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Donut4 This model is a fine-tuned version of [humayoun/Donut2](https://huggingface.co/humayoun/Donut2) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.0.dev0 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
alexbalandi/q-FrozenLake-v1-4x4-noSlippery
alexbalandi
2023-03-14T09:37:55Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2023-03-14T09:18:38Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="alexbalandi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
theintuitiveye/modernartstyle
theintuitiveye
2023-03-14T09:35:50Z
56
10
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-01-02T09:24:46Z
--- title: modernartstyle colorFrom: green colorTo: indigo sdk: gradio sdk_version: 3.11.0 app_file: app.py pinned: false license: creativeml-openrail-m tags: - stable-diffusion - text-to-image inference: true --- # **ModernArt Diffusion** You can use this model to generate modern-art-style images. ## Dataset ~100 modern art images. ## Usage Use the Stability AI VAE for better results. For the majority of prompts, the trigger phrase is not required; use *"modernartst"* to force the style. *samples* ![image](https://drive.google.com/uc?export=view&id=1Wib7w07Ly99ymXCSAAvLUsyZUkTkgPei) Help us create models of professional standard. Consider supporting us on [Patreon](https://www.patreon.com/intuitiveai) / [Ko-fi](https://ko-fi.com/intuitiveai) / [Paypal](https://www.paypal.com/paypalme/theintuitiveye) ## *Demo* We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run ModernArt Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/theintuitiveye/modernartstyle) ## *License* This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: - You can't use the model to deliberately produce nor share illegal or harmful outputs or content - The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license - You may re-distribute the weights and use the model commercially and/or as a service.
If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
TiborUdvari/distilgpt2-finetuned-wikitext2
TiborUdvari
2023-03-14T09:32:12Z
179
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-03-14T09:15:42Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6421 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7599 | 1.0 | 2334 | 3.6655 | | 3.6518 | 2.0 | 4668 | 3.6463 | | 3.6008 | 3.0 | 7002 | 3.6421 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
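At generation time, a causal LM like this fine-tuned distilgpt2 turns its logits into next-token probabilities via a temperature-scaled softmax and samples from them. A self-contained sketch of that decoding step (illustrative only — real inference goes through the model's `generate` method):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Softmax over logits / temperature, then sample one token id."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                          # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1                     # guard against float rounding

random.seed(0)
print(sample_next_token([2.0, 0.5, -1.0], temperature=0.7))
```

Lower temperatures sharpen the distribution toward the greedy choice; higher ones flatten it and increase diversity.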
starktoney81/dfb
starktoney81
2023-03-14T09:21:44Z
0
0
null
[ "dataset:yizhongw/self_instruct", "arxiv:1910.09700", "license:openrail", "region:us" ]
null
2023-03-14T09:20:32Z
--- license: openrail datasets: - yizhongw/self_instruct --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. 
--> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ### How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. 
Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nrshoudi/wav2vec2-large-xls-r-300m-Arabic-phonemeIPA
nrshoudi
2023-03-14T09:18:45Z
12
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-03-13T01:07:08Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-Arabic-phonemeIPA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-Arabic-phonemeIPA This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0398 - Per: 0.0833 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 9 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Per | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1294 | 1.0 | 102 | 0.0673 | 0.1060 | | 0.1964 | 2.0 | 204 | 0.0753 | 0.1167 | | 0.2395 | 3.0 | 306 | 0.0851 | 0.1134 | | 0.2317 | 4.0 | 408 | 0.0849 | 0.1152 | | 0.2082 | 5.0 | 510 | 0.0853 | 0.1085 | | 0.1856 | 6.0 | 612 | 0.0626 | 0.0946 | | 0.1616 | 7.0 | 714 | 0.0635 | 0.0892 | | 0.1426 | 8.0 | 816 | 0.0554 | 0.0863 | | 0.1284 | 9.0 | 918 | 0.0555 | 0.0846 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
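The `Per` column above is the phoneme error rate: the Levenshtein edit distance between predicted and reference phoneme sequences, divided by the reference length. A minimal implementation on toy sequences (the phoneme strings below are made up for illustration):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (rolling-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        cur = [i]
        for j, h in enumerate(hyp, start=1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (or match)
        prev = cur
    return prev[-1]

def phoneme_error_rate(ref_phonemes, hyp_phonemes):
    """Edits needed to turn the hypothesis into the reference, per reference phoneme."""
    return edit_distance(ref_phonemes, hyp_phonemes) / len(ref_phonemes)

ref = ["k", "i", "t", "a", "b"]
hyp = ["k", "i", "t", "b"]           # one deletion
print(phoneme_error_rate(ref, hyp))  # → 0.2
```

A final PER of 0.0833 thus means roughly one phoneme edit per twelve reference phonemes on the evaluation set.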