modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
rach405/test_trainer6
rach405
2022-11-23T22:42:58Z
103
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T18:19:23Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: test_trainer6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer6 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0525 - Accuracy: 0.3229 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.0672 | 1.0 | 88 | 2.0811 | 0.3229 | | 1.9813 | 2.0 | 176 | 2.0715 | 0.3229 | | 2.1212 | 3.0 | 264 | 2.0525 | 0.3229 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cpu - Tokenizers 0.11.6
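The card above stops at the framework versions and gives no usage snippet. A minimal sketch is shown below, assuming the repository exposes the fine-tuned weights and tokenizer under the repo id from this record (the label names are whatever the Trainer saved, so they are not spelled out here):

```python
from transformers import pipeline

# Load the fine-tuned BERT classifier straight from the Hub (repo id taken from this record).
classifier = pipeline("text-classification", model="rach405/test_trainer6")

# Score a sample sentence; the returned label/score pair uses the labels stored in the model config.
print(classifier("This is a quick smoke test of the classifier."))
```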
huggingtweets/josephflaherty
huggingtweets
2022-11-23T22:21:56Z
113
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-23T22:20:04Z
--- language: en thumbnail: http://www.huggingtweets.com/josephflaherty/1669242112755/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1529933319919616011/mEzYnY5Z_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Joe Flaherty – Venture Capital Scribe</div> <div style="text-align: center; font-size: 14px;">@josephflaherty</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Joe Flaherty – Venture Capital Scribe. | Data | Joe Flaherty – Venture Capital Scribe | | --- | --- | | Tweets downloaded | 3247 | | Retweets | 150 | | Short tweets | 154 | | Tweets kept | 2943 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/h0zhab8z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @josephflaherty's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hw29ydt) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hw29ydt/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/josephflaherty') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
NobodyX23/Pablobato
NobodyX23
2022-11-23T22:19:27Z
0
0
null
[ "region:us" ]
null
2022-11-23T16:41:57Z
--- license: creativeml-openrail-m --- # Pablo Lobato style model Text prompt: Pablobato style
Guizmus/SD_DreamerCommunities_Collection
Guizmus
2022-11-23T22:17:55Z
0
29
EveryDream
[ "EveryDream", "diffusers", "stable-diffusion", "text-to-image", "image-to-image", "en", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2022-11-13T17:51:54Z
--- language: - en license: creativeml-openrail-m thumbnail: "https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_main.jpg" tags: - stable-diffusion - text-to-image - image-to-image - diffusers library_name: "EveryDream" inference: false --- # Introduction This is a collection of models made from and for the users of the Stable Diffusion Discord server. Different categories of channels exist there; the "Dreamers Communities" cover a range of subjects, such as Anime, 3D, or Architecture. Each of these channels has users posting images made with Stable Diffusion. After asking the users, and after collecting a dataset either from new submissions or from the channel's history depending on its activity, I intend to build multiple models representing the style of each channel, so that users can more easily produce things in the style they like and mix it with other styles. These models are mainly trained with EveryDream and should eventually result in a Mega Model for the datasets that are compatible. Some models, like the Anime one, need to stay on a different starting point and may not get merged. # CharacterChan Style ## Dataset & training This model was based on [RunwayML SD 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with updated VAE. The dataset was a collaborative effort of the Stable Diffusion #CharacterChan channel, made of pictures from the users themselves using their different techniques. 50 total pictures in the dataset, 160 repeats total each, over 4 Epoch on LR1e-6. This was trained using EveryDream with a full caption of all training pictures. The style will be called by the use of the token **CharacterChan Style**. ## Showcase & Downloads v1 ![Showcase](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_CharacterChanStyle-v1.jpg) [CKPT (2GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/diffusers/CharacterChan/CharacterChanStyle-v1.ckpt) [CKPT with training optimizers (11GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CharacterChanStyle-v1_with_optimizers.ckpt) [Diffusers](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/tree/main/diffusers/CharacterChan) [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CharacterChanStyle-v1.zip) # CreatureChan Style ## Dataset & training This model was based on [RunwayML SD 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with updated VAE. The dataset was a collaborative effort of the Stable Diffusion #CreatureChan channel, made of pictures from the users themselves using their different techniques. 50 total pictures in the dataset, 160 repeats total each, over 4 Epoch on LR1e-6. This was trained using EveryDream with a full caption of all training pictures. The style will be called by the use of the token **CreatureChan Style**.
## Showcase & Downloads v1 ![Showcase](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_CreatureChanStyle-v1.jpg) [CKPT (2GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/diffusers/CreatureChan/CreatureChanStyle-v1.ckpt) [CKPT with training optimizers (11GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/CreatureChanStyle-v1_with_optimizers.ckpt) [Diffusers](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/tree/main/diffusers/CreatureChan) [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/CreatureChanStyle-v1.zip) # 3DChan Style ## Dataset & training This model was based on [RunwayML SD 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model with updated VAE. The dataset was a collaborative effort of the Stable Diffusion #3D channel, made of pictures from the users themselves using their different techniques. 120 total pictures in the dataset, 500 repeats total each, over 10 Epoch on LR1e-6. This was trained using EveryDream with a full caption of all training pictures. The style will be called by the use of the token **3D Style**. Other significant tokens : rick roll, fullbody shot, bad cosplay man ## Showcase & Downloads v1 ![Showcase](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_3DChanStyle-v1.jpg) [CKPT (2GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/diffusers/3DStyle/3DStyle-v1.ckpt) [CKPT with training optimizers (11GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/3DStyle-v1_with_optimizers.ckpt) [Diffusers](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/tree/main/diffusers/3DStyle) [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/3DChanStyle-v1.zip) # AnimeChan Style ## Dataset & training This model was based on [Trinart](https://huggingface.co/naclbit/trinart_stable_diffusion_v2) model. The dataset was a collaborative effort of the Stable Diffusion #anime channel, made of pictures from the users themselves using their different techniques. 100 total pictures in the dataset, 300 repeats total each, over 6 Epoch on LR1e-6. This was trained using EveryDream with a full caption of all training pictures. The style will be called by the use of the token **AnimeChan Style**. ## Showcase & Downloads v2 ![Showcase](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_AnimeChan-v2.jpg) [CKPT (2GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/diffusers/AnimeStyle/AnimeChanStyle-v2.ckpt) [Diffusers](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/tree/main/diffusers/AnimeStyle) [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/AnimeChanStyle-v2.zip) ## Showcase & Downloads v1 ![Showcase](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/images/showcase_AnimeChan-v1.jpg) [CKPT (2GB)](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/ckpt/AnimeChanStyle-v1.ckpt) [Dataset](https://huggingface.co/Guizmus/SD_DreamerCommunities_Collection/resolve/main/datasets/AnimeChanStyle-v1.zip) # License These models are open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. 
The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
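The card links per-style diffusers folders under `diffusers/<StyleName>` but gives no loading example. The sketch below is one hypothetical way to pull a single style and prompt it with its trigger token, assuming `diffusers/CharacterChan` is a complete pipeline layout (the folder choice, dtype, and output filename are illustrative):

```python
import os

import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import snapshot_download

# Download only the CharacterChan diffusers folder from the collection repo.
local_dir = snapshot_download(
    repo_id="Guizmus/SD_DreamerCommunities_Collection",
    allow_patterns=["diffusers/CharacterChan/*"],
)

# Load the style pipeline and generate with its trigger token.
pipe = StableDiffusionPipeline.from_pretrained(
    os.path.join(local_dir, "diffusers", "CharacterChan"),
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("portrait of a knight, CharacterChan Style").images[0]
image.save("characterchan_knight.png")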
TUMxudashuai/DQN-LunarLander-v2
TUMxudashuai
2022-11-23T21:02:30Z
5
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-23T21:01:50Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -83.37 +/- 29.36 name: mean_reward verified: false --- # **DQN** Agent playing **LunarLander-v2** This is a trained model of a **DQN** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
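The usage block in the card is left as a TODO. A minimal sketch of loading and evaluating the agent is given below, assuming the checkpoint inside the repo follows the usual `dqn-LunarLander-v2.zip` naming (the filename is an assumption, not confirmed by the card):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename assumed from the usual SB3 naming convention).
checkpoint = load_from_hub(
    repo_id="TUMxudashuai/DQN-LunarLander-v2",
    filename="dqn-LunarLander-v2.zip",
)

# Rebuild the agent and measure its mean reward over a few episodes (requires gym[box2d]).
model = DQN.load(checkpoint)
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```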
SherlockHolmes/ddpm-butterflies-128
SherlockHolmes
2022-11-23T21:02:17Z
3
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-23T19:48:55Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/SherlockHolmes/ddpm-butterflies-128/tensorboard?#scalars)
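The "How to use" snippet in the card is still a TODO. Since the repo is tagged `diffusers:DDPMPipeline`, a minimal sampling sketch could look like the following (the output filename is illustrative):

```python
from diffusers import DDPMPipeline

# Load the unconditional butterfly DDPM from the Hub.
pipeline = DDPMPipeline.from_pretrained("SherlockHolmes/ddpm-butterflies-128")

# Sample one 128x128 butterfly image and save it.
image = pipeline(batch_size=1).images[0]
image.save("ddpm_butterfly.png")
```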
tomekkorbak/agitated_jones
tomekkorbak
2022-11-23T19:45:02Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T19:37:18Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: agitated_jones results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # agitated_jones This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': 
{'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'agitated_jones', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/3t7xpujc
tomekkorbak/wonderful_engelbart
tomekkorbak
2022-11-23T19:38:30Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T19:34:25Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: wonderful_engelbart results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # wonderful_engelbart This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 
'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'wonderful_engelbart', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/2fqdlqy2
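The config above shows conditional training with `<|aligned|>`/`<|misaligned|>` control prefixes. A sketch of sampling under the aligned prefix is given below, assuming the repository exposes standard GPT-2-style weights plus the extended tokenizer (an assumption; the card has no usage section):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed: the repo stores GPT-2-style weights and a tokenizer that includes the two control tokens.
repo_id = "tomekkorbak/wonderful_engelbart"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate under the aligned prefix, mirroring the sampling settings listed in the config above.
inputs = tokenizer("<|aligned|>The weather today", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    max_length=128,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```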
NehalJani/fin_sentiment
NehalJani
2022-11-23T18:11:11Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T18:04:57Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: fin_sentiment results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fin_sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 0.4801 | 0.8006 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
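The card gives no usage example. One possible way to run the classifier with the Auto classes, assuming the repo id from this record holds the fine-tuned weights and tokenizer (the sample headline is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned DistilBERT sentiment classifier (repo id taken from this record).
repo_id = "NehalJani/fin_sentiment"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Score a sample financial headline and map the argmax logit to its label name.
inputs = tokenizer("Quarterly revenue beat analyst expectations.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```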
whatlurks/test
whatlurks
2022-11-23T17:24:28Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-23T17:24:28Z
--- license: creativeml-openrail-m ---
monakth/bert-base-multilingual-uncased-sv2
monakth
2022-11-23T17:03:27Z
117
0
transformers
[ "transformers", "pytorch", "bert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-23T17:01:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: bert-base-multilingual-uncased-svv results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-multilingual-uncased-svv This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the squad_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
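The card has no usage section. A minimal extractive-QA sketch with the `question-answering` pipeline, assuming the repo id from this record (the question and context are illustrative):

```python
from transformers import pipeline

# Load the squad_v2 fine-tuned multilingual BERT as an extractive QA pipeline (assumed repo id).
qa = pipeline("question-answering", model="monakth/bert-base-multilingual-uncased-sv2")

# squad_v2-style usage: the model may also predict "no answer" for unanswerable questions.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result["answer"], result["score"])
```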
jamiehudson/579-STmodel-v1a
jamiehudson
2022-11-23T16:46:09Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T16:45:56Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 300 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 300, "warmup_steps": 30, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
tomekkorbak/wizardly_dubinsky
tomekkorbak
2022-11-23T16:20:10Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T16:15:26Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: wizardly_dubinsky results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # wizardly_dubinsky This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 1, 
'name': 'Unlikelihood', 'score_threshold': 0.00078}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'wizardly_dubinsky', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/2kewh3j9
tomekkorbak/cranky_jang
tomekkorbak
2022-11-23T16:18:30Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T16:17:34Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: cranky_jang results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # cranky_jang This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 3147 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'value_head_config': 
{'is_detached': False}}, 'path_or_name': 'gpt2'}, 'objective': {'alpha': 0.5, 'beta': 10, 'name': 'AWR'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 1024, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'cranky_jang', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/37cxyfb2
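Because the AWR objective attaches a value head during training, the published weights may contain parameters beyond the plain GPT-2 language model; for sampling, loading the checkpoint as an ordinary causal LM should be sufficient. A minimal sketch, assuming the repository lives under the author's namespace:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tomekkorbak/cranky_jang"  # assumption: published under the author's namespace
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # any value-head weights are ignored here

# Generation settings mirror the evaluation generate_kwargs in the config above.
inputs = tokenizer("The weather today is", return_tensors="pt")
with torch.no_grad():
    output = model.generate(
        **inputs,
        do_sample=True,
        max_length=128,
        min_length=10,
        temperature=0.7,
        top_k=0,
        top_p=0.9,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```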
tomekkorbak/ecstatic_hoover
tomekkorbak
2022-11-23T16:14:21Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T16:13:50Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: ecstatic_hoover results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # ecstatic_hoover This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'drop_token_fraction': 0.01, 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 
'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'ecstatic_hoover', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1p7d3shx
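Because this run uses conditional training, generations are steered by prepending the `<|aligned|>` control token; the evaluation config above also blocks both control tokens from being sampled via `bad_words_ids`. The sketch below mirrors that setup and assumes the checkpoint is published under the author's namespace.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tomekkorbak/ecstatic_hoover"  # assumption: author's namespace on the Hub
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Prepend the aligned control token and block both control tokens during sampling,
# as in the evaluation scenarios above (token ids 50257 and 50258).
prompt = "<|aligned|>" + "The weather today is"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    bad_words_ids=[[50257], [50258]],
    do_sample=True,
    max_length=128,
    min_length=10,
    temperature=0.7,
    top_k=0,
    top_p=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```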
tomekkorbak/dazzling_turing
tomekkorbak
2022-11-23T16:13:54Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T16:13:46Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: dazzling_turing results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # dazzling_turing This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'conditional_training_config': {'aligned_prefix': '<|aligned|>', 'misaligned_prefix': '<|misaligned|>', 'threshold': 0.00056}, 'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}], 'scenario_configs': [{'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048, 'prefix': '<|aligned|>'}, {'generate_kwargs': {'bad_words_ids': [[50257], [50258]], 'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prefix': '<|aligned|>', 'prompt_before_control': True, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': 
{'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096, 'prefix': '<|aligned|>'}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'num_additional_tokens': 2, 'path_or_name': 'gpt2'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2', 'special_tokens': ['<|aligned|>', '<|misaligned|>']}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'dazzling_turing', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/3roy3cpj
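As a quick sanity check that the published tokenizer carries the two control tokens added on top of GPT-2's 50,257-entry vocabulary (the evaluation config blocks ids 50257 and 50258), one could run something like the following; the repository name and the exact ids are assumptions based on the config above.

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer is published alongside the model under the author's namespace.
tokenizer = AutoTokenizer.from_pretrained("tomekkorbak/dazzling_turing")

print(len(tokenizer))                                     # expected: 50259
print(tokenizer.convert_tokens_to_ids("<|aligned|>"))     # expected: 50257
print(tokenizer.convert_tokens_to_ids("<|misaligned|>"))  # expected: 50258
```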
tomekkorbak/vigorous_thompson
tomekkorbak
2022-11-23T16:07:17Z
0
0
null
[ "generated_from_trainer", "en", "dataset:tomekkorbak/detoxify-pile-chunk3-0-50000", "dataset:tomekkorbak/detoxify-pile-chunk3-50000-100000", "dataset:tomekkorbak/detoxify-pile-chunk3-100000-150000", "dataset:tomekkorbak/detoxify-pile-chunk3-150000-200000", "dataset:tomekkorbak/detoxify-pile-chunk3-200000-250000", "dataset:tomekkorbak/detoxify-pile-chunk3-250000-300000", "dataset:tomekkorbak/detoxify-pile-chunk3-300000-350000", "dataset:tomekkorbak/detoxify-pile-chunk3-350000-400000", "dataset:tomekkorbak/detoxify-pile-chunk3-400000-450000", "dataset:tomekkorbak/detoxify-pile-chunk3-450000-500000", "dataset:tomekkorbak/detoxify-pile-chunk3-500000-550000", "dataset:tomekkorbak/detoxify-pile-chunk3-550000-600000", "dataset:tomekkorbak/detoxify-pile-chunk3-600000-650000", "dataset:tomekkorbak/detoxify-pile-chunk3-650000-700000", "dataset:tomekkorbak/detoxify-pile-chunk3-700000-750000", "dataset:tomekkorbak/detoxify-pile-chunk3-750000-800000", "dataset:tomekkorbak/detoxify-pile-chunk3-800000-850000", "dataset:tomekkorbak/detoxify-pile-chunk3-850000-900000", "dataset:tomekkorbak/detoxify-pile-chunk3-900000-950000", "dataset:tomekkorbak/detoxify-pile-chunk3-950000-1000000", "dataset:tomekkorbak/detoxify-pile-chunk3-1000000-1050000", "dataset:tomekkorbak/detoxify-pile-chunk3-1050000-1100000", "dataset:tomekkorbak/detoxify-pile-chunk3-1100000-1150000", "dataset:tomekkorbak/detoxify-pile-chunk3-1150000-1200000", "dataset:tomekkorbak/detoxify-pile-chunk3-1200000-1250000", "dataset:tomekkorbak/detoxify-pile-chunk3-1250000-1300000", "dataset:tomekkorbak/detoxify-pile-chunk3-1300000-1350000", "dataset:tomekkorbak/detoxify-pile-chunk3-1350000-1400000", "dataset:tomekkorbak/detoxify-pile-chunk3-1400000-1450000", "dataset:tomekkorbak/detoxify-pile-chunk3-1450000-1500000", "dataset:tomekkorbak/detoxify-pile-chunk3-1500000-1550000", "dataset:tomekkorbak/detoxify-pile-chunk3-1550000-1600000", "dataset:tomekkorbak/detoxify-pile-chunk3-1600000-1650000", "dataset:tomekkorbak/detoxify-pile-chunk3-1650000-1700000", "dataset:tomekkorbak/detoxify-pile-chunk3-1700000-1750000", "dataset:tomekkorbak/detoxify-pile-chunk3-1750000-1800000", "dataset:tomekkorbak/detoxify-pile-chunk3-1800000-1850000", "dataset:tomekkorbak/detoxify-pile-chunk3-1850000-1900000", "dataset:tomekkorbak/detoxify-pile-chunk3-1900000-1950000", "license:mit", "region:us" ]
null
2022-11-23T16:07:08Z
--- language: - en license: mit tags: - generated_from_trainer datasets: - tomekkorbak/detoxify-pile-chunk3-0-50000 - tomekkorbak/detoxify-pile-chunk3-50000-100000 - tomekkorbak/detoxify-pile-chunk3-100000-150000 - tomekkorbak/detoxify-pile-chunk3-150000-200000 - tomekkorbak/detoxify-pile-chunk3-200000-250000 - tomekkorbak/detoxify-pile-chunk3-250000-300000 - tomekkorbak/detoxify-pile-chunk3-300000-350000 - tomekkorbak/detoxify-pile-chunk3-350000-400000 - tomekkorbak/detoxify-pile-chunk3-400000-450000 - tomekkorbak/detoxify-pile-chunk3-450000-500000 - tomekkorbak/detoxify-pile-chunk3-500000-550000 - tomekkorbak/detoxify-pile-chunk3-550000-600000 - tomekkorbak/detoxify-pile-chunk3-600000-650000 - tomekkorbak/detoxify-pile-chunk3-650000-700000 - tomekkorbak/detoxify-pile-chunk3-700000-750000 - tomekkorbak/detoxify-pile-chunk3-750000-800000 - tomekkorbak/detoxify-pile-chunk3-800000-850000 - tomekkorbak/detoxify-pile-chunk3-850000-900000 - tomekkorbak/detoxify-pile-chunk3-900000-950000 - tomekkorbak/detoxify-pile-chunk3-950000-1000000 - tomekkorbak/detoxify-pile-chunk3-1000000-1050000 - tomekkorbak/detoxify-pile-chunk3-1050000-1100000 - tomekkorbak/detoxify-pile-chunk3-1100000-1150000 - tomekkorbak/detoxify-pile-chunk3-1150000-1200000 - tomekkorbak/detoxify-pile-chunk3-1200000-1250000 - tomekkorbak/detoxify-pile-chunk3-1250000-1300000 - tomekkorbak/detoxify-pile-chunk3-1300000-1350000 - tomekkorbak/detoxify-pile-chunk3-1350000-1400000 - tomekkorbak/detoxify-pile-chunk3-1400000-1450000 - tomekkorbak/detoxify-pile-chunk3-1450000-1500000 - tomekkorbak/detoxify-pile-chunk3-1500000-1550000 - tomekkorbak/detoxify-pile-chunk3-1550000-1600000 - tomekkorbak/detoxify-pile-chunk3-1600000-1650000 - tomekkorbak/detoxify-pile-chunk3-1650000-1700000 - tomekkorbak/detoxify-pile-chunk3-1700000-1750000 - tomekkorbak/detoxify-pile-chunk3-1750000-1800000 - tomekkorbak/detoxify-pile-chunk3-1800000-1850000 - tomekkorbak/detoxify-pile-chunk3-1850000-1900000 - tomekkorbak/detoxify-pile-chunk3-1900000-1950000 model-index: - name: vigorous_thompson results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. 
--> # vigorous_thompson This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets. 
## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 50354 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.5.1 - Tokenizers 0.11.6 # Full config {'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'], 'is_split_by_sentences': True}, 'generation': {'force_call_on': [25354], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}, {'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'challenging_rtp', 'num_samples': 2048, 'prompts_path': 'resources/challenging_rtp.jsonl'}], 'scorer_config': {'device': 'cuda:0'}}, 'kl_gpt3_callback': {'force_call_on': [25354], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': True, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'path_or_name': 'gpt2'}, 'objective': {'name': 
'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 64, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'vigorous_thompson', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0005, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output104340', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25354, 'save_strategy': 'steps', 'seed': 42, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} # Wandb URL: https://wandb.ai/tomekkorbak/apo/runs/1kpqechr
daniel-tomiwa/finetuned-pegasus-model
daniel-tomiwa
2022-11-23T15:11:24Z
96
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-23T14:27:25Z
--- tags: - generated_from_trainer model-index: - name: finetuned-pegasus-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-pegasus-model This model is a fine-tuned version of [human-centered-summarization/financial-summarization-pegasus](https://huggingface.co/human-centered-summarization/financial-summarization-pegasus) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 240 | 0.6898 | 40.3397 | 29.9123 | 33.8417 | 37.7847 | 61.5333 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
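The card does not include a usage example; since the base model is a Pegasus summarizer, a standard summarization pipeline should work. This is a sketch only: the model id is taken from this repository's name, and the sample text is illustrative.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="daniel-tomiwa/finetuned-pegasus-model")

# Illustrative input text in the financial-news style of the base model.
article = (
    "The company reported quarterly revenue of 2.1 billion dollars, up 12 percent "
    "year over year, driven by strong demand in its cloud segment, and raised its "
    "full-year guidance."
)
print(summarizer(article, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```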
alexziweiwang/exp18-F04-both
alexziweiwang
2022-11-23T14:47:20Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-23T10:19:23Z
--- tags: - generated_from_trainer model-index: - name: exp18-F04-both results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exp18-F04-both This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4137 - Wer: 0.4647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 41.5777 | 0.34 | 500 | 3.0940 | 1.0188 | | 3.2064 | 0.68 | 1000 | 2.8577 | 1.0157 | | 2.997 | 1.02 | 1500 | 2.7604 | 1.0126 | | 2.8537 | 1.36 | 2000 | 2.7305 | 1.0 | | 2.6677 | 1.7 | 2500 | 2.3201 | 1.2512 | | 2.4414 | 2.04 | 3000 | 2.1550 | 1.2575 | | 2.2113 | 2.38 | 3500 | 2.0825 | 1.2433 | | 2.0619 | 2.72 | 4000 | 2.0245 | 1.2245 | | 1.921 | 3.07 | 4500 | 1.6541 | 1.2057 | | 1.8182 | 3.41 | 5000 | 1.3678 | 1.1962 | | 1.759 | 3.75 | 5500 | 1.1805 | 1.2214 | | 1.6229 | 4.09 | 6000 | 1.0100 | 1.1695 | | 1.4557 | 4.43 | 6500 | 0.8956 | 1.1287 | | 1.4799 | 4.77 | 7000 | 0.7858 | 1.0801 | | 1.3277 | 5.11 | 7500 | 0.7306 | 1.0267 | | 1.2419 | 5.45 | 8000 | 0.6326 | 0.9262 | | 1.1537 | 5.79 | 8500 | 0.6280 | 0.8901 | | 1.0972 | 6.13 | 9000 | 0.5639 | 0.9027 | | 1.0375 | 6.47 | 9500 | 0.7219 | 0.8352 | | 0.9301 | 6.81 | 10000 | 0.4786 | 0.7881 | | 0.9423 | 7.15 | 10500 | 0.4969 | 0.7441 | | 0.8276 | 7.49 | 11000 | 0.4640 | 0.7551 | | 0.8674 | 7.83 | 11500 | 0.5401 | 0.7582 | | 0.7633 | 8.17 | 12000 | 0.4610 | 0.6970 | | 0.7314 | 8.51 | 12500 | 0.4026 | 0.6923 | | 0.7259 | 8.86 | 13000 | 0.4874 | 0.6970 | | 0.6591 | 9.2 | 13500 | 0.4701 | 0.6546 | | 0.615 | 9.54 | 14000 | 0.4259 | 0.6421 | | 0.6098 | 9.88 | 14500 | 0.4206 | 0.6122 | | 0.554 | 10.22 | 15000 | 0.4550 | 0.6201 | | 0.5521 | 10.56 | 15500 | 0.4777 | 0.6154 | | 0.5726 | 10.9 | 16000 | 0.3307 | 0.5997 | | 0.5301 | 11.24 | 16500 | 0.4095 | 0.5777 | | 0.5098 | 11.58 | 17000 | 0.4914 | 0.5934 | | 0.5174 | 11.92 | 17500 | 0.4223 | 0.5981 | | 0.4674 | 12.26 | 18000 | 0.3593 | 0.5651 | | 0.4574 | 12.6 | 18500 | 0.3951 | 0.5651 | | 0.4182 | 12.94 | 19000 | 0.4727 | 0.5808 | | 0.388 | 13.28 | 19500 | 0.4737 | 0.5447 | | 0.3924 | 13.62 | 20000 | 0.4047 | 0.5322 | | 0.3752 | 13.96 | 20500 | 0.3499 | 0.5306 | | 0.3374 | 14.31 | 21000 | 0.2930 | 0.5243 | | 0.3239 | 14.65 | 21500 | 0.4708 | 0.5338 | | 0.3609 | 14.99 | 22000 | 0.3415 | 0.5118 | | 0.309 | 15.33 | 22500 | 0.4738 | 0.5149 | | 0.2987 | 15.67 | 23000 | 0.4351 | 0.5275 | | 0.3726 | 16.01 | 23500 | 0.4305 | 0.5306 | | 0.3075 | 16.35 | 24000 | 0.3290 | 0.5212 | | 0.2995 | 16.69 | 24500 | 0.3386 | 0.4976 | | 0.3262 | 17.03 | 25000 | 0.5279 | 0.5165 | | 0.2607 | 17.37 | 25500 | 0.3836 | 0.5008 | | 0.2664 | 17.71 | 26000 | 0.4128 | 0.4961 | | 0.2578 | 18.05 | 26500 | 0.3517 | 0.4945 | | 0.2443 | 18.39 | 27000 | 0.3126 | 0.4804 | | 0.2488 | 18.73 | 27500 | 0.3895 | 0.4976 | | 0.2382 
| 19.07 | 28000 | 0.5097 | 0.5055 | | 0.2684 | 19.41 | 28500 | 0.4171 | 0.5071 | | 0.2038 | 19.75 | 29000 | 0.4126 | 0.4851 | | 0.2273 | 20.1 | 29500 | 0.4142 | 0.4898 | | 0.2144 | 20.44 | 30000 | 0.5022 | 0.4961 | | 0.2274 | 20.78 | 30500 | 0.4640 | 0.4819 | | 0.2055 | 21.12 | 31000 | 0.5124 | 0.4851 | | 0.1814 | 21.46 | 31500 | 0.4745 | 0.4804 | | 0.201 | 21.8 | 32000 | 0.4669 | 0.4835 | | 0.1788 | 22.14 | 32500 | 0.5168 | 0.4851 | | 0.2206 | 22.48 | 33000 | 0.4279 | 0.4772 | | 0.1847 | 22.82 | 33500 | 0.3862 | 0.4772 | | 0.1875 | 23.16 | 34000 | 0.4506 | 0.4851 | | 0.1546 | 23.5 | 34500 | 0.4411 | 0.4867 | | 0.1768 | 23.84 | 35000 | 0.3386 | 0.4584 | | 0.1601 | 24.18 | 35500 | 0.3914 | 0.4678 | | 0.1815 | 24.52 | 36000 | 0.3449 | 0.4600 | | 0.1495 | 24.86 | 36500 | 0.4789 | 0.4819 | | 0.1347 | 25.2 | 37000 | 0.4584 | 0.4741 | | 0.1516 | 25.54 | 37500 | 0.3993 | 0.4678 | | 0.1514 | 25.89 | 38000 | 0.3898 | 0.4662 | | 0.1288 | 26.23 | 38500 | 0.4486 | 0.4819 | | 0.1414 | 26.57 | 39000 | 0.4233 | 0.4835 | | 0.1407 | 26.91 | 39500 | 0.4119 | 0.4710 | | 0.1383 | 27.25 | 40000 | 0.4084 | 0.4788 | | 0.1391 | 27.59 | 40500 | 0.4254 | 0.4757 | | 0.1302 | 27.93 | 41000 | 0.4208 | 0.4741 | | 0.1335 | 28.27 | 41500 | 0.3952 | 0.4662 | | 0.1426 | 28.61 | 42000 | 0.4086 | 0.4647 | | 0.1303 | 28.95 | 42500 | 0.4071 | 0.4615 | | 0.1148 | 29.29 | 43000 | 0.4220 | 0.4662 | | 0.1131 | 29.63 | 43500 | 0.4170 | 0.4662 | | 0.0998 | 29.97 | 44000 | 0.4137 | 0.4647 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
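No usage example is given; a hedged sketch with the automatic-speech-recognition pipeline follows. The model id is taken from this repository's name, and the audio path is a placeholder for a local recording (wav2vec2 models expect 16 kHz mono audio; ffmpeg is typically needed to decode most formats).

```python
from transformers import pipeline

# "sample.wav" is a placeholder path to a local audio file.
asr = pipeline("automatic-speech-recognition", model="alexziweiwang/exp18-F04-both")
print(asr("sample.wav")["text"])
```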
kobe/vit-base-beans
kobe
2022-11-23T14:44:58Z
250
0
transformers
[ "transformers", "pytorch", "vit", "image-classification", "vision", "generated_from_trainer", "dataset:beans", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-10-02T02:56:59Z
--- license: apache-2.0 tags: - image-classification - vision - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: vit-base-beans results: - task: name: Image Classification type: image-classification dataset: name: beans type: beans config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9849624060150376 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-beans This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0866 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1337 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2501 | 1.0 | 130 | 0.2281 | 0.9624 | | 0.2895 | 2.0 | 260 | 0.1138 | 0.9925 | | 0.1549 | 3.0 | 390 | 0.1065 | 0.9774 | | 0.0952 | 4.0 | 520 | 0.0866 | 0.9850 | | 0.1511 | 5.0 | 650 | 0.0875 | 0.9774 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
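For completeness, here is a minimal inference sketch using the image-classification pipeline on a validation image from the beans dataset the model was fine-tuned on:

```python
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("image-classification", model="kobe/vit-base-beans")

# Use one validation image from the beans dataset as an example input.
image = load_dataset("beans", split="validation")[0]["image"]
print(classifier(image))
```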
pheinisch/roberta-base-150T-argumentative-sentence-detector
pheinisch
2022-11-23T14:37:42Z
116
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "argument mining", "claims", "sentence classification", "en", "dataset:FS150T", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-09T07:04:50Z
--- language: - "en" tags: - "argument mining" - "claims" - "sentence classification" datasets: - "FS150T" metrics: - "accuracy" - "f1" --- # _EXPERIMENTAL_ roberta-base-150T-argumentative-sentence-detector (this model might not be the optimal one for accomplishing the task) - Task: Detects whether a sentence is argumentative (1 = yes / 0 = no) given the topic and the sentence itself. - language: English - dataset: Few-Shot-150T Corpus v1.1 (FS150T-Corpus) _fine-tuned roberta-base_ ## Performance on test data (threshold: 0.5) ```` {'accuracy': 0.7451388888888889, 'f1': 0.6690712353471596, 'precision': 0.733201581027668, 'recall': 0.615257048092869} ````
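The card states that the classifier takes both the topic and the sentence, but it does not show how the two are combined at inference time. The sketch below passes them as a text pair, which is a common convention for RoBERTa sentence-pair classifiers and is only an assumption here; the topic and sentence are illustrative.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="pheinisch/roberta-base-150T-argumentative-sentence-detector",
)

# Assumption: topic and sentence are fed as a text pair; check the training code
# if a different concatenation scheme was used.
print(clf({
    "text": "We should introduce school uniforms",
    "text_pair": "Mandatory uniforms suppress the individuality of students.",
}))
```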
jamiehudson/579-STmodel-v3
jamiehudson
2022-11-23T14:29:06Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T14:28:54Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1800 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1800, "warmup_steps": 180, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
PlanTL-GOB-ES/roberta-base-es-wikicat-es
PlanTL-GOB-ES
2022-11-23T14:02:14Z
332
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "español", "text classification", "WikiCAT_esv2", "es", "dataset:projecte-aina/WikiCAT_esv2", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T13:34:08Z
--- language: - es license: apache-2.0 tags: - "español" - "text classification" - "WikiCAT_esv2" datasets: - "projecte-aina/WikiCAT_esv2" metrics: - f1-macro model-index: - name: roberta-base-es-wikicat-es results: - task: type: text-classification dataset: type: projecte-aina/WikiCAT_esv2 name: WikiCAT_esv2 metrics: - name: F1-macro type: f1 value: 0.76632 - name: Accuracy type: accuracy value: 0.79347 widget: - text: "Sedna es el cuerpo menor del sistema solar número 90377; concretamente es un objeto transneptuniano." - text: "El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España." --- # Spanish BERTa-v2 (roberta-base-es) finetuned for Text Classification. ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-es-wikicat-es** is a Text Classification model for the Spanish language, fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-bne model card for more details). ## Intended uses and limitations The **roberta-base-es-wikicat-es** model can be used to classify texts. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("text-classification", model="PlanTL-GOB-ES/roberta-base-es-wikicat-es") example = "Sedna es el cuerpo menor del sistema solar número 90377; concretamente es un objeto transneptuniano." tc_results = nlp(example) pprint(tc_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data We used the TC dataset in Spanish called [WikiCAT_esv2](https://huggingface.co/datasets/PlanTL-GOB-ES/WikiCAT_esv2) for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and three learning rates (1e-5, 3e-5, 5e-5) for 5 epochs. We then selected the best learning rate (2e-5) and checkpoint (epoch 3) using the downstream task metric on the corresponding development set. ## Evaluation ### Variable and metrics This model was fine-tuned maximizing the macro F1 score.
### Evaluation results We evaluated the _roberta-base-es-wikicat-es_ on the WikiCAT_esv2 dev set: | Model | WikiCAT_esv2 (F1) | | ------------|:-------------| | roberta-base-es-wikicat-es | 0.76632 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to aina@bsc.es ### Copyright Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ## Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. </details>
mshuggingface/swin-tiny-patch4-window7-224-ms-test1
mshuggingface
2022-11-23T13:54:56Z
205
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-23T13:51:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-ms-test1 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.5 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-ms-test1 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6036 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.7667 | 0.5 | | No log | 2.0 | 2 | 0.6644 | 0.5 | | No log | 3.0 | 3 | 0.6036 | 0.5 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
archipela/ell-vocabulary
archipela
2022-11-23T13:33:26Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "text-regression", "unk", "dataset:huynhdoo/autotrain-data-ell-vocabulary", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-11-23T13:31:43Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - huynhdoo/autotrain-data-ell-vocabulary co2_eq_emissions: emissions: 2.3719978527185237 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 2218271145 - CO2 Emissions (in grams): 2.3720 ## Validation Metrics - Loss: 0.228 - MSE: 0.228 - MAE: 0.383 - R2: 0.343 - RMSE: 0.478 - Explained Variance: 0.402 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-vocabulary-2218271145 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-vocabulary-2218271145", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-vocabulary-2218271145", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
jamiehudson/579-STmodel-v4
jamiehudson
2022-11-23T13:31:46Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T12:18:54Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1800 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1800, "warmup_steps": 180, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
archipela/ell-cohesion
archipela
2022-11-23T13:30:47Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "text-regression", "unk", "dataset:huynhdoo/autotrain-data-ell-cohesion", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-11-23T13:27:59Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - huynhdoo/autotrain-data-ell-cohesion co2_eq_emissions: emissions: 4.569992504332477 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 2217971118 - CO2 Emissions (in grams): 4.5700 ## Validation Metrics - Loss: 0.259 - MSE: 0.259 - MAE: 0.407 - R2: 0.416 - RMSE: 0.509 - Explained Variance: 0.427 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-cohesion-2217971118 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-cohesion-2217971118", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-cohesion-2217971118", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
aherzberg/wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES
aherzberg
2022-11-23T13:27:27Z
158
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-11-23T12:20:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3710 - Accuracy: 0.8822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7822 | 0.96 | 18 | 0.6874 | 0.7424 | | 0.5685 | 1.96 | 36 | 0.5974 | 0.7845 | | 0.45 | 2.96 | 54 | 0.4988 | 0.8182 | | 0.399 | 3.96 | 72 | 0.4583 | 0.8384 | | 0.3457 | 4.96 | 90 | 0.4415 | 0.8451 | | 0.352 | 5.96 | 108 | 0.3710 | 0.8822 | | 0.2878 | 6.96 | 126 | 0.3881 | 0.8620 | | 0.2669 | 7.96 | 144 | 0.4309 | 0.8502 | | 0.2406 | 8.96 | 162 | 0.4271 | 0.8502 | | 0.2491 | 9.96 | 180 | 0.4271 | 0.8502 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.14.0 - Tokenizers 0.10.3
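The card has no inference snippet; below is a minimal sketch, assuming the checkpoint works with the standard `audio-classification` pipeline and that the input is a mono 16 kHz clip. The file name and the printed label names are illustrative, not taken from the card.

```python
from transformers import pipeline

# Load the fine-tuned positive/negative audio classifier from the Hub
classifier = pipeline(
    "audio-classification",
    model="aherzberg/wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES",
)

# Classify a local clip; the pipeline resamples file inputs to the model's expected rate
predictions = classifier("example_clip.wav")
print(predictions)  # list of {'label': ..., 'score': ...} dicts
```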
heziiiii/ddpm-butterflies-128
heziiiii
2022-11-23T13:26:40Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-23T12:08:11Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/heziiiii/ddpm-butterflies-128/tensorboard?#scalars)
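The "How to use" block in this card is still a TODO; the following is a minimal sketch of what it might look like, assuming the checkpoint loads with the 🤗 Diffusers `DDPMPipeline` in a recent diffusers version (the output filename is arbitrary).

```python
from diffusers import DDPMPipeline

# Load the unconditional DDPM trained on the butterflies subset
pipeline = DDPMPipeline.from_pretrained("heziiiii/ddpm-butterflies-128")

# Run the full denoising loop and save one generated 128x128 image
image = pipeline().images[0]
image.save("butterfly_sample.png")
```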
sd-concepts-library/yellow-cockatiel-parrot
sd-concepts-library
2022-11-23T12:50:05Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-11-23T12:49:55Z
--- license: mit --- ### Yellow Cockatiel Parrot on Stable Diffusion This is the `<rosa-popugai>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<rosa-popugai> 0](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/3.jpeg) ![<rosa-popugai> 1](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/0.jpeg) ![<rosa-popugai> 2](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/2.jpeg) ![<rosa-popugai> 3](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/1.jpeg)
cafeai/cafe_aesthetic
cafeai
2022-11-23T12:08:27Z
3,264
50
transformers
[ "transformers", "pytorch", "beit", "image-classification", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-14T09:56:39Z
--- license: agpl-3.0 --- # Info Since people are downloading this and I don't know why, I'll add some information. This model is an image classifier fine-tuned on `microsoft/beit-base-patch16-384`. Its purpose is to be used in the dataset conditioning step for the [Waifu Diffusion project](https://huggingface.co/hakurei/waifu-diffusion), a fine-tune effort for Stable Diffusion. As WD1.4 is planned to have a *significantly large dataset* (~15m images), it is infeasible to analyze every image manually to determine whether or not it should be included in the final training dataset. This image classifier is trained on approximately 3.5k real-life and anime/manga images. Its purpose is to remove aesthetically worthless images from our dataset by classifying them as "`not_aesthetic`". The image classifier was trained to **err on the side of caution** and will generally tend to include images unless they are in a "manga-like" format, have messy lines and/or are sketches, or include an unacceptable amount of text (namely text that covers the primary subject of the image). The idea is that certain images will hurt a SD fine-tune. Note: This classifier is not perfect, just like every other classifier out there. However, with a sufficiently large dataset, any imperfections or misclassifications should average themselves out due to the Law of Large Numbers. You can test out the classifier [here](https://huggingface.co/spaces/cafeai/cafe_aesthetic_demo), along with some other classifiers for the project. # License Released under the aGPLv3. Use the model as you wish for any purpose. If you make changes, share the changes.
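The card explains what the classifier is for but not how to call it; a short sketch is below, assuming the standard `image-classification` pipeline works for this BEiT-based checkpoint. The image path is a placeholder, and only the `not_aesthetic` label name is confirmed by the card.

```python
from transformers import pipeline

# BEiT-based aesthetic filter used for dataset conditioning
classifier = pipeline("image-classification", model="cafeai/cafe_aesthetic")

# Score a local image (a URL or PIL.Image also works with this pipeline)
scores = classifier("some_image.png")
print(scores)  # images scored as "not_aesthetic" would be dropped from the dataset
```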
christofid/dabert-multi
christofid
2022-11-23T12:05:14Z
121
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-23T11:43:17Z
--- license: mit --- ### dapBERT-multi dapBERT-multi is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. bert-base-multilingual-cased is used as the base model for training. The training dataset consists of a corpus of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices as well as the World Intellectual Property Organization.
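The card gives no usage example; since the checkpoint is tagged `fill-mask`, a minimal sketch would be the following (the example sentence is illustrative).

```python
from transformers import pipeline

# Patent-domain-adapted multilingual BERT; bert-base-multilingual-cased uses the [MASK] token
fill_mask = pipeline("fill-mask", model="christofid/dabert-multi")

for prediction in fill_mask("The invention relates to a [MASK] for wireless communication."):
    print(prediction["token_str"], round(prediction["score"], 3))
```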
gwz0202/ddpm-butterflied-128
gwz0202
2022-11-23T12:03:46Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/few-shot-pokemon", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-23T10:51:41Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/few-shot-pokemon metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflied-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/few-shot-pokemon` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/gwz0202/ddpm-butterflied-128/tensorboard?#scalars)
dscoursetechnion/t5-small-finetuned-xsum
dscoursetechnion
2022-11-23T12:03:09Z
113
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-23T08:03:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: train args: default metrics: - name: Rouge1 type: rouge value: 26.7823 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5658 - Rouge1: 26.7823 - Rouge2: 6.7168 - Rougel: 20.9066 - Rougelsum: 20.9054 - Gen Len: 18.8193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.8016 | 1.0 | 4251 | 2.5658 | 26.7823 | 6.7168 | 20.9066 | 20.9054 | 18.8193 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
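No usage snippet is included in the card; a minimal sketch with the `summarization` pipeline is shown below. The input text is illustrative, and whether a `summarize:` prefix is needed depends on how the fine-tune was run.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="dscoursetechnion/t5-small-finetuned-xsum")

article = (
    "The local council has approved plans for a new cycle path linking the "
    "town centre with the railway station, with construction due to start next spring."
)
print(summarizer(article, max_length=40, min_length=5, do_sample=False))
```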
Intel/bert-mini-sst2-distilled-sparse-90-1X4-block
Intel
2022-11-23T11:48:53Z
115
1
transformers
[ "transformers", "pytorch", "onnx", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-16T01:44:17Z
--- license: mit --- # Sparse BERT mini model (uncased) A fine-tuned model pruned to 90% sparsity with a 1x4 block structure. The model is a pruned version of the [BERT mini model](https://huggingface.co/prajjwal1/bert-mini). ## Intended Use The model can be used for inference with sparsity optimization. Further details on the model and its usage will be available soon. ## Evaluation Results We get the following results on the SST-2 development set: | Task | Accuracy | |------|----------| | SST-2 | 87.2 | This is better than the dense [bert mini](https://huggingface.co/M-FAC/bert-mini-finetuned-sst2), which achieves 84.74%.
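For reference, the checkpoint can be queried like any SST-2 classifier; a hedged sketch follows. The example sentence and the exact label names are not taken from the card, and the sparsity only translates into a speed-up when paired with a sparsity-aware inference runtime.

```python
from transformers import pipeline

# Sparse BERT-mini SST-2 classifier; dense inference works out of the box,
# the pruning pays off only with a sparsity-aware inference engine
classifier = pipeline(
    "text-classification",
    model="Intel/bert-mini-sst2-distilled-sparse-90-1X4-block",
)

print(classifier("a charming and often affecting journey"))
```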
josiahkhor/en_triage_subject
josiahkhor
2022-11-23T11:43:56Z
5
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2022-11-23T11:30:59Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_triage_subject results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_triage_subject` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `tok2vec`, `textcat` | | **Components** | `tok2vec`, `textcat` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (5 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `General Correspondence`, `Invoice`, `New Claim Form`, `Assessor Report`, `Complaint` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 79.52 | | `CATS_MICRO_P` | 99.34 | | `CATS_MICRO_R` | 99.34 | | `CATS_MICRO_F` | 99.34 | | `CATS_MACRO_P` | 79.37 | | `CATS_MACRO_R` | 79.67 | | `CATS_MACRO_F` | 79.52 | | `CATS_MACRO_AUC` | 79.99 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TOK2VEC_LOSS` | 25952.93 | | `TEXTCAT_LOSS` | 58.98 |
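The card lists the pipeline components and labels but no loading example; a minimal sketch is below, assuming the packaged pipeline has been installed from this repo (the install step and the example sentence are assumptions).

```python
import spacy

# Load the installed text-categorization pipeline by its package name
nlp = spacy.load("en_triage_subject")

doc = nlp("Please find attached the assessor's report for the claim.")
print(doc.cats)  # scores over the five labels listed in the label scheme above
```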
tubyneto/crowdedflowertunedbert
tubyneto
2022-11-23T11:21:31Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T11:21:19Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # tubyneto/crowdedflowertunedbert This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('tubyneto/crowdedflowertunedbert') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=tubyneto/crowdedflowertunedbert) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 916 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters: ``` {'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True} ``` Parameters of the fit()-Method: ``` { "epochs": 5, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Watwat100/gpu2
Watwat100
2022-11-23T11:06:00Z
3
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T11:05:48Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 2347 with parameters: ``` {'batch_size': 12, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 2, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 4694, "warmup_steps": 470, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
jesspi/IFE-sentence-model
jesspi
2022-11-23T10:29:47Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T10:29:34Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3170 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 6.629946430758516e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 3170, "warmup_steps": 317, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
selmey/behaviour-change-valence-german
selmey
2022-11-23T10:02:13Z
103
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T09:17:40Z
bert-base-german-cased fine-tuned on the Valence level of the GLoHBCD Dataset (https://github.com/SelinaMeyer/GLoHBCD). The dataset leverages Motivational Interviewing client behaviour codes to evaluate user utterances across different dimensions and to gauge the user's stance and thoughts about behaviour change in the context of weight loss. This model classifies German text around behaviour change as either "Change Talk" (utterances in favour of change, 1) or "Sustain Talk" (utterances in favour of the status quo, 0). When using the model, please cite: @InProceedings{meyer-elsweiler:2022:LREC, author = {Meyer, Selina and Elsweiler, David}, title = {GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2226--2235}, url = {https://aclanthology.org/2022.lrec-1.239}}
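A usage sketch for the classifier described above, assuming the standard `text-classification` pipeline. The German example sentence is illustrative, and the raw label names may come back as `LABEL_0`/`LABEL_1` rather than the Change/Sustain Talk names.

```python
from transformers import pipeline

# 1 = Change Talk (in favour of change), 0 = Sustain Talk (in favour of the status quo)
classifier = pipeline("text-classification", model="selmey/behaviour-change-valence-german")

print(classifier("Ich möchte wirklich abnehmen und werde diese Woche damit anfangen."))
```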
cgt/pert-qa
cgt
2022-11-23T09:46:49Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:cmrc2018", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-03T06:29:16Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - cmrc2018 model-index: - name: pert-qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pert-qa This model is a fine-tuned version of [hfl/chinese-pert-large](https://huggingface.co/hfl/chinese-pert-large) on the cmrc2018 dataset. It achieves the following results on the evaluation set: - Loss: 0.6942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1273 | 1.0 | 1200 | 0.7088 | | 0.6132 | 2.0 | 2400 | 0.6942 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.10.0+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
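The card has no inference example; since the model is a Chinese extractive QA head fine-tuned on CMRC 2018, a minimal sketch would be the following (question and context are illustrative).

```python
from transformers import pipeline

qa = pipeline("question-answering", model="cgt/pert-qa")

result = qa(
    question="北京是哪个国家的首都?",
    context="北京是中华人民共和国的首都,也是该国的政治和文化中心。",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```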
Watwat100/gpu1
Watwat100
2022-11-23T09:19:44Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T09:19:31Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1744 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1744, "warmup_steps": 175, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Roy029/mpyt5_e20
Roy029
2022-11-23T08:58:44Z
105
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-22T09:15:04Z
--- license: openrail --- # Model Card for mpyt5_e20 <!-- Provide a quick summary of what the model is/does. [Optional] --> A model pre-trained not only on natural language but also on Python code. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Python Code (1.05GB) ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - MLM - Python vocab (https://huggingface.co/kkuramitsu/mt5-pytoken) ### Preprocessing mT5 + Python ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> - mT5-small (300M parameters) - max_length = 128 # Model Version - epoch5: https://huggingface.co/Roy029/mpyt5_e5 - epoch10: https://huggingface.co/Roy029/mpyt5_e10 - epoch15: https://huggingface.co/Roy029/mpyt5_e15 - epoch20: This Model
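The card describes the pretraining but not how to load the checkpoint; a minimal sketch, assuming it loads like any mT5 seq2seq model with its bundled tokenizer. The span-infilling prompt is illustrative and the quality of the completion is untested.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Roy029/mpyt5_e20")
model = AutoModelForSeq2SeqLM.from_pretrained("Roy029/mpyt5_e20")

# T5-style span infilling on a Python snippet, matching the MLM pretraining objective
inputs = tokenizer("def add(a, b): return <extra_id_0>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```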
Roy029/mpyt5_e15
Roy029
2022-11-23T08:57:10Z
106
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-22T11:18:09Z
--- license: openrail --- # Model Card for mpyt5_e15 <!-- Provide a quick summary of what the model is/does. [Optional] --> A model pre-trained not only on natural language but also on Python code. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Python Code (1.05GB) ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - MLM - Python vocab (https://huggingface.co/kkuramitsu/mt5-pytoken) ### Preprocessing mT5 + Python ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> - mT5-small (300M parameters) - max_length = 128 # Model Version - epoch5: https://huggingface.co/Roy029/mpyt5_e5 - epoch10: https://huggingface.co/Roy029/mpyt5_e10 - epoch15: This Model - epoch20: https://huggingface.co/Roy029/mpyt5_e20
birgermoell/whisper-small-sv-bm
birgermoell
2022-11-23T08:54:31Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "sv", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-05T00:29:07Z
--- language: - sv license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: WhisperSmallSwedishBirgerMoell results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: sv-SE split: train+validation args: sv-SE metrics: - name: Wer type: wer value: 19.58538356053884 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WhisperSmallSwedishBirgerMoell This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3253 - Wer: 19.5854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1523 | 1.29 | 1000 | 0.2924 | 21.5509 | | 0.0515 | 2.59 | 2000 | 0.2856 | 20.4593 | | 0.0214 | 3.88 | 3000 | 0.3010 | 19.9054 | | 0.0042 | 5.17 | 4000 | 0.3253 | 19.5854 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.6.1 - Tokenizers 0.13.1
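No usage example is given in the card; a minimal transcription sketch, assuming the standard `automatic-speech-recognition` pipeline (the audio file name is a placeholder).

```python
from transformers import pipeline

# Swedish Whisper-small fine-tune; the pipeline handles feature extraction and decoding
asr = pipeline("automatic-speech-recognition", model="birgermoell/whisper-small-sv-bm")

print(asr("swedish_sample.wav")["text"])
```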
eikoenchine/xlm-roberta-base-finetuned-panx-all
eikoenchine
2022-11-23T08:42:37Z
137
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-23T08:29:14Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1713 - F1: 0.8544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3076 | 1.0 | 835 | 0.2008 | 0.7923 | | 0.1565 | 2.0 | 1670 | 0.1809 | 0.8437 | | 0.1027 | 3.0 | 2505 | 0.1713 | 0.8544 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.7.0 - Tokenizers 0.12.1
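For reference, the checkpoint can be used as a standard NER tagger; a hedged sketch follows. The example sentence is illustrative, and the entity label set is whatever the PAN-X-style training data used.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="eikoenchine/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```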
crodri/autotrain-wikicat_es-2213570987
crodri
2022-11-23T08:18:56Z
101
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "es", "dataset:crodri/autotrain-data-wikicat_es", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T08:07:19Z
--- tags: - autotrain - text-classification language: - es widget: - text: "El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España." datasets: - crodri/autotrain-data-wikicat_es co2_eq_emissions: emissions: 10.4216765068249 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2213570987 - CO2 Emissions (in grams): 10.4217 ## Validation Metrics - Loss: 0.713 - Accuracy: 0.786 - Macro F1: 0.758 - Micro F1: 0.786 - Weighted F1: 0.785 - Macro Precision: 0.762 - Micro Precision: 0.786 - Weighted Precision: 0.787 - Macro Recall: 0.757 - Micro Recall: 0.786 - Weighted Recall: 0.786 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/crodri/autotrain-wikicat_es-2213570987 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("crodri/autotrain-wikicat_es-2213570987", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("crodri/autotrain-wikicat_es-2213570987", use_auth_token=True) inputs = tokenizer("El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España.", return_tensors="pt") outputs = model(**inputs) ```
utkarshbelkhede/distill-pegasus-sec-10K
utkarshbelkhede
2022-11-23T08:03:24Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-23T07:14:46Z
--- tags: - generated_from_trainer metrics: - rouge model-index: - name: distill-pegasus-cnn-16-4-sec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distill-pegasus-cnn-16-4-sec This model is a fine-tuned version of [sshleifer/distill-pegasus-cnn-16-4](https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0146 - Rouge1: 48.3239 - Rouge2: 34.4713 - Rougel: 43.5113 - Rougelsum: 46.371 - Gen Len: 106.98 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 99 | 3.0918 | 20.297 | 6.5201 | 16.1329 | 18.0062 | 64.38 | | No log | 2.0 | 198 | 2.4999 | 23.2475 | 10.4548 | 19.4955 | 21.3927 | 73.92 | | No log | 3.0 | 297 | 2.0991 | 25.1919 | 13.2866 | 22.1497 | 23.7988 | 80.5 | | No log | 4.0 | 396 | 1.7855 | 29.3799 | 17.4892 | 26.0768 | 27.3547 | 84.08 | | No log | 5.0 | 495 | 1.5388 | 34.3057 | 21.5888 | 30.043 | 32.1758 | 98.26 | | 2.7981 | 6.0 | 594 | 1.3553 | 36.5817 | 22.9587 | 32.0113 | 34.3963 | 95.02 | | 2.7981 | 7.0 | 693 | 1.2281 | 37.9149 | 24.4547 | 33.9621 | 35.7424 | 90.04 | | 2.7981 | 8.0 | 792 | 1.1430 | 40.9219 | 27.4248 | 36.1746 | 38.8887 | 96.56 | | 2.7981 | 9.0 | 891 | 1.0844 | 43.935 | 29.7536 | 38.63 | 41.6618 | 98.7 | | 2.7981 | 10.0 | 990 | 1.0472 | 45.3353 | 32.042 | 40.8945 | 43.3416 | 106.22 | | 1.5684 | 11.0 | 1089 | 1.0254 | 47.6564 | 34.3221 | 43.1757 | 45.7094 | 107.88 | | 1.5684 | 12.0 | 1188 | 1.0146 | 48.3239 | 34.4713 | 43.5113 | 46.371 | 106.98 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
xaeroq/dqn-Qbert-v5
xaeroq
2022-11-23T07:49:54Z
0
0
stable-baselines3
[ "stable-baselines3", "ALE/Qbert-v5", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-23T07:49:30Z
--- library_name: stable-baselines3 tags: - ALE/Qbert-v5 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ALE/Qbert-v5 type: ALE/Qbert-v5 metrics: - type: mean_reward value: 6665.00 +/- 1973.49 name: mean_reward verified: false --- # **DQN** Agent playing **ALE/Qbert-v5** This is a trained model of a **DQN** agent playing **ALE/Qbert-v5** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/ python enjoy.py --algo dqn --env ALE/Qbert-v5 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/ rl_zoo3 enjoy --algo dqn --env ALE/Qbert-v5 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env ALE/Qbert-v5 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Qbert-v5 -f logs/ -orga xaeroq ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
utkarshbelkhede/distilbart-sec-10K
utkarshbelkhede
2022-11-23T07:02:57Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-23T06:54:29Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: distilbart-cnn-12-6-sec results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbart-cnn-12-6-sec This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1379 - Rouge1: 72.2845 - Rouge2: 61.1501 - Rougel: 67.6999 - Rougelsum: 70.9968 - Gen Len: 113.8 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 99 | 0.4429 | 56.0806 | 40.5969 | 47.5271 | 53.7227 | 115.44 | | No log | 2.0 | 198 | 0.2279 | 56.6042 | 42.1781 | 48.9542 | 54.951 | 116.84 | | No log | 3.0 | 297 | 0.1845 | 65.9646 | 51.8575 | 59.8647 | 64.103 | 113.8 | | No log | 4.0 | 396 | 0.1532 | 71.6132 | 61.1434 | 67.4165 | 70.4093 | 110.46 | | No log | 5.0 | 495 | 0.1379 | 72.2845 | 61.1501 | 67.6999 | 70.9968 | 113.8 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
Mentatko/distilbert-base-uncased-finetuned-squad
Mentatko
2022-11-23T06:34:56Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-22T05:12:35Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Framework versions - Transformers 4.23.1 - Pytorch 1.13.0+cpu - Datasets 2.6.1 - Tokenizers 0.13.2
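A hedged usage sketch with the question-answering pipeline (question and context are illustrative only):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Mentatko/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```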
wyu1/GenRead-3B-TQA
wyu1
2022-11-23T05:04:45Z
2
0
transformers
[ "transformers", "pytorch", "t5", "license:cc-by-4.0", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-11-23T04:41:41Z
---
license: cc-by-4.0
---

# GenRead: FiD model trained on TQA

This is the model checkpoint of GenRead [2], based on T5-3B and trained on TriviaQA [1].

Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW; LR 6e-5; best dev at 8500 steps.

References:

[1] TriviaQA: A Large Scale Dataset for Reading Comprehension and Question Answering. ACL 2017

[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022

## Model performance

We evaluate it on the TriviaQA dataset; the EM score is 71.55.
alexziweiwang/exp17-F03-both
alexziweiwang
2022-11-23T04:23:20Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-23T00:00:04Z
--- tags: - generated_from_trainer model-index: - name: exp17-F03-both results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exp17-F03-both This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9268 - Wer: 0.9485 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 47.4704 | 0.36 | 500 | 3.3075 | 1.0131 | | 3.1649 | 0.71 | 1000 | 3.3442 | 1.0 | | 2.9674 | 1.07 | 1500 | 2.6986 | 1.0 | | 2.7514 | 1.42 | 2000 | 2.5789 | 1.1299 | | 2.6045 | 1.78 | 2500 | 2.3025 | 1.2529 | | 2.373 | 2.14 | 3000 | 2.2169 | 1.2698 | | 2.1632 | 2.49 | 3500 | 1.9883 | 1.2667 | | 2.0942 | 2.85 | 4000 | 1.9294 | 1.2567 | | 1.9239 | 3.2 | 4500 | 1.9799 | 1.2467 | | 1.7549 | 3.56 | 5000 | 1.7485 | 1.2252 | | 1.6973 | 3.91 | 5500 | 1.6799 | 1.2283 | | 1.5823 | 4.27 | 6000 | 1.6847 | 1.2267 | | 1.4761 | 4.63 | 6500 | 1.6971 | 1.1968 | | 1.4381 | 4.98 | 7000 | 1.6280 | 1.2052 | | 1.2509 | 5.34 | 7500 | 1.6657 | 1.2060 | | 1.3112 | 5.69 | 8000 | 1.5618 | 1.1783 | | 1.1851 | 6.05 | 8500 | 1.6555 | 1.1783 | | 1.1112 | 6.41 | 9000 | 1.6586 | 1.1752 | | 1.0463 | 6.76 | 9500 | 1.6135 | 1.1683 | | 1.041 | 7.12 | 10000 | 1.5444 | 1.1522 | | 0.9451 | 7.47 | 10500 | 1.5561 | 1.1622 | | 0.9454 | 7.83 | 11000 | 1.5044 | 1.1483 | | 0.8496 | 8.19 | 11500 | 1.6724 | 1.1330 | | 0.825 | 8.54 | 12000 | 1.5950 | 1.1414 | | 0.8291 | 8.9 | 12500 | 1.6023 | 1.1384 | | 0.7279 | 9.25 | 13000 | 1.6319 | 1.1314 | | 0.7394 | 9.61 | 13500 | 1.5478 | 1.1337 | | 0.7079 | 9.96 | 14000 | 1.7564 | 1.1453 | | 0.609 | 10.32 | 14500 | 1.7671 | 1.1245 | | 0.6639 | 10.68 | 15000 | 1.7471 | 1.1314 | | 0.648 | 11.03 | 15500 | 1.7694 | 1.2160 | | 0.577 | 11.39 | 16000 | 1.6149 | 1.1760 | | 0.577 | 11.74 | 16500 | 1.9288 | 1.1238 | | 0.5695 | 12.1 | 17000 | 1.7503 | 1.1253 | | 0.5326 | 12.46 | 17500 | 1.5635 | 1.1376 | | 0.5423 | 12.81 | 18000 | 1.7083 | 1.1668 | | 0.4775 | 13.17 | 18500 | 1.7054 | 1.1245 | | 0.4772 | 13.52 | 19000 | 1.6455 | 1.1045 | | 0.4737 | 13.88 | 19500 | 1.5996 | 1.0968 | | 0.4529 | 14.23 | 20000 | 1.9847 | 1.1653 | | 0.4461 | 14.59 | 20500 | 1.6845 | 1.1084 | | 0.4497 | 14.95 | 21000 | 1.6465 | 1.0938 | | 0.4096 | 15.3 | 21500 | 1.5919 | 1.0769 | | 0.3897 | 15.66 | 22000 | 1.5637 | 1.0761 | | 0.4234 | 16.01 | 22500 | 1.6360 | 1.0953 | | 0.3659 | 16.37 | 23000 | 1.7573 | 1.0830 | | 0.3352 | 16.73 | 23500 | 1.8474 | 1.0976 | | 0.3886 | 17.08 | 24000 | 1.9115 | 1.0953 | | 0.3255 | 17.44 | 24500 | 1.8820 | 1.0815 | | 0.3405 | 17.79 | 25000 | 1.6862 | 1.0346 | | 0.3205 | 18.15 | 25500 | 1.6912 | 1.0500 | | 0.322 | 18.51 | 26000 | 1.6253 | 1.0615 | | 0.296 | 18.86 | 26500 | 1.7924 | 1.0546 | | 0.2869 | 19.22 | 27000 | 1.8204 | 1.0899 | | 0.269 | 19.57 | 27500 | 1.7558 | 1.0292 | | 0.2844 | 
19.93 | 28000 | 1.6038 | 1.0131 | | 0.2543 | 20.28 | 28500 | 1.7935 | 1.0161 | | 0.3025 | 20.64 | 29000 | 1.8706 | 1.0423 | | 0.2707 | 21.0 | 29500 | 2.0011 | 1.0208 | | 0.2401 | 21.35 | 30000 | 1.9058 | 1.0161 | | 0.2609 | 21.71 | 30500 | 1.7555 | 1.0015 | | 0.2403 | 22.06 | 31000 | 1.9301 | 1.0085 | | 0.2538 | 22.42 | 31500 | 1.8586 | 0.9969 | | 0.2334 | 22.78 | 32000 | 1.8588 | 0.9985 | | 0.2013 | 23.13 | 32500 | 1.9307 | 1.0108 | | 0.2122 | 23.49 | 33000 | 1.8830 | 0.9908 | | 0.2242 | 23.84 | 33500 | 1.8133 | 0.9754 | | 0.188 | 24.2 | 34000 | 1.8435 | 0.9800 | | 0.2142 | 24.56 | 34500 | 1.8491 | 0.9792 | | 0.2059 | 24.91 | 35000 | 1.8005 | 0.9754 | | 0.1794 | 25.27 | 35500 | 1.8845 | 0.9700 | | 0.185 | 25.62 | 36000 | 1.8620 | 0.9731 | | 0.1843 | 25.98 | 36500 | 1.8461 | 0.9539 | | 0.1717 | 26.33 | 37000 | 1.8100 | 0.9639 | | 0.164 | 26.69 | 37500 | 1.8192 | 0.9547 | | 0.1888 | 27.05 | 38000 | 1.8005 | 0.9470 | | 0.1792 | 27.4 | 38500 | 1.8901 | 0.9562 | | 0.1708 | 27.76 | 39000 | 1.8306 | 0.9547 | | 0.1508 | 28.11 | 39500 | 1.8934 | 0.9508 | | 0.1751 | 28.47 | 40000 | 1.8956 | 0.9523 | | 0.1541 | 28.83 | 40500 | 1.9360 | 0.9416 | | 0.1611 | 29.18 | 41000 | 1.9346 | 0.9454 | | 0.1684 | 29.54 | 41500 | 1.9247 | 0.9470 | | 0.1463 | 29.89 | 42000 | 1.9268 | 0.9485 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
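A hedged inference sketch with the automatic-speech-recognition pipeline; the audio path is a placeholder and 16 kHz mono input is assumed:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="alexziweiwang/exp17-F03-both")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```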
caffeinism/ddpm-butterflies-128
caffeinism
2022-11-23T04:20:02Z
2
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-21T09:47:12Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/caffeinism/ddpm-butterflies-128/tensorboard?#scalars)
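The card's usage snippet is still a TODO; below is a hedged sampling sketch, assuming the repository holds the standard `DDPMPipeline` layout saved by the training script:

```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("caffeinism/ddpm-butterflies-128")
pipeline.to("cuda" if torch.cuda.is_available() else "cpu")

# One 128x128 butterfly sample; fewer steps trade quality for speed.
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```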
Migueluao123/roberta-base-bne-finetuned-amazon_reviews_multi
Migueluao123
2022-11-23T03:30:00Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T02:45:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2215 - Accuracy: 0.9343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1948 | 1.0 | 1250 | 0.1743 | 0.933 | | 0.0979 | 2.0 | 2500 | 0.2215 | 0.9343 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
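A hedged usage sketch (the example review is illustrative, and the meaning of the output labels is not stated in the card):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Migueluao123/roberta-base-bne-finetuned-amazon_reviews_multi",
)
print(classifier("El producto llegó a tiempo y funciona perfectamente."))
```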
sd-concepts-library/dreams
sd-concepts-library
2022-11-23T03:28:49Z
0
3
null
[ "license:mit", "region:us" ]
null
2022-11-23T03:28:44Z
--- license: mit --- ### Dreams on Stable Diffusion This is the `<meeg>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<meeg> 0](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/3.jpeg) ![<meeg> 1](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/0.jpeg) ![<meeg> 2](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/2.jpeg) ![<meeg> 3](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/1.jpeg) ![<meeg> 4](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/4.jpeg)
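With a recent version of 🤗 Diffusers the concept can also be loaded outside the notebooks; a hedged sketch, where the base checkpoint `CompVis/stable-diffusion-v1-4` is only an assumption (any compatible Stable Diffusion 1.x model should work):

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption, not specified by the concept card.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/dreams")  # registers the <meeg> token

image = pipe("a city skyline at night in the style of <meeg>").images[0]
image.save("meeg-style.png")
```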
Jellywibble/gpt2_dalio_reward_model_v0
Jellywibble
2022-11-23T03:25:01Z
103
0
transformers
[ "transformers", "pytorch", "gpt2", "text-classification", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T03:00:21Z
https://wandb.ai/jellywibble/huggingface/runs/fwr1rnir?workspace=user-jellywibble
Egrt/Luuuu
Egrt
2022-11-23T02:54:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-20T12:11:42Z
--- license: apache-2.0 ---
jeveloper/sd-v1-4
jeveloper
2022-11-23T02:50:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-23T02:50:59Z
--- license: creativeml-openrail-m ---
nhanv/ner_cv
nhanv
2022-11-23T01:27:32Z
112
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-23T01:25:59Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: reco-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reco-ner This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0668 - Precision: 0.8125 - Recall: 0.8790 - F1: 0.8444 - Accuracy: 0.9819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4516 | 1.0 | 626 | 0.4047 | 0.4332 | 0.4564 | 0.4445 | 0.8980 | | 0.3677 | 2.0 | 1252 | 0.2774 | 0.4918 | 0.5731 | 0.5293 | 0.9193 | | 0.2892 | 3.0 | 1878 | 0.2133 | 0.6139 | 0.6581 | 0.6353 | 0.9384 | | 0.2736 | 4.0 | 2504 | 0.1772 | 0.6248 | 0.6854 | 0.6537 | 0.9488 | | 0.221 | 5.0 | 3130 | 0.1503 | 0.6295 | 0.7328 | 0.6772 | 0.9560 | | 0.1569 | 6.0 | 3756 | 0.1283 | 0.6821 | 0.8108 | 0.7409 | 0.9623 | | 0.1534 | 7.0 | 4382 | 0.0995 | 0.7412 | 0.8119 | 0.7749 | 0.9708 | | 0.089 | 8.0 | 5008 | 0.0846 | 0.7695 | 0.8353 | 0.8010 | 0.9760 | | 0.0923 | 9.0 | 5634 | 0.0743 | 0.7881 | 0.8740 | 0.8289 | 0.9789 | | 0.0711 | 10.0 | 6260 | 0.0668 | 0.8125 | 0.8790 | 0.8444 | 0.9819 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
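A hedged usage sketch with the token-classification pipeline (the example sentence is illustrative; the label set is not documented in the card):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="nhanv/ner_cv", aggregation_strategy="simple")
print(ner("Experienced data scientist skilled in Python, TensorFlow and AWS, based in Hanoi."))
```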
flamesbob/Yadu_model
flamesbob
2022-11-23T01:16:33Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-08T00:22:31Z
---
license: creativeml-openrail-m
---

To use the trained style and draw emphasis from it, include the word `m_yadu` in your prompt.

`yadu_model_6k` was trained on anythingv3 for 6000 steps, with the classification "artstyle".

## License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).

Please read the full license here
AlekseyKorshuk/6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr
AlekseyKorshuk
2022-11-23T00:59:42Z
5
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-22T12:39:25Z
--- license: other tags: - generated_from_trainer metrics: - accuracy model-index: - name: 6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4121 - Accuracy: 0.3487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4875 | 0.11 | 1 | 2.5059 | 0.3397 | | 2.5339 | 0.22 | 2 | 2.5059 | 0.3397 | | 2.5161 | 0.33 | 3 | 2.5059 | 0.3397 | | 2.4524 | 0.44 | 4 | 2.5059 | 0.3397 | | 2.554 | 0.56 | 5 | 2.4785 | 0.3416 | | 2.4678 | 0.67 | 6 | 2.4785 | 0.3416 | | 2.4836 | 0.78 | 7 | 2.4473 | 0.3458 | | 2.4138 | 0.89 | 8 | 2.4297 | 0.3473 | | 2.4551 | 1.0 | 9 | 2.4121 | 0.3487 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
mwmathis/DeepLabCutModelZoo-full_cheetah
mwmathis
2022-11-23T00:39:10Z
0
0
null
[ "computer_vision", "pose_estimation", "arxiv:2103.13282", "license:lgpl-3.0", "region:us" ]
null
2022-11-23T00:38:27Z
--- license: lgpl-3.0 tags: - computer_vision - pose_estimation --- Model from Joska et al., ICRA 2021. Please cite: https://arxiv.org/abs/2103.13282
sd-concepts-library/alberto-montt
sd-concepts-library
2022-11-23T00:37:04Z
0
7
null
[ "license:mit", "region:us" ]
null
2022-11-23T00:36:55Z
--- license: mit --- ### Alberto_Montt on Stable Diffusion This is the `<AlbertoMontt>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<AlbertoMontt> 0](https://huggingface.co/sd-concepts-library/alberto-montt/resolve/main/concept_images/3.jpeg) ![<AlbertoMontt> 1](https://huggingface.co/sd-concepts-library/alberto-montt/resolve/main/concept_images/0.jpeg) ![<AlbertoMontt> 2](https://huggingface.co/sd-concepts-library/alberto-montt/resolve/main/concept_images/5.jpeg) ![<AlbertoMontt> 3](https://huggingface.co/sd-concepts-library/alberto-montt/resolve/main/concept_images/2.jpeg) ![<AlbertoMontt> 4](https://huggingface.co/sd-concepts-library/alberto-montt/resolve/main/concept_images/1.jpeg) ![<AlbertoMontt> 5](https://huggingface.co/sd-concepts-library/alberto-montt/resolve/main/concept_images/4.jpeg)
manirai91/mbert-conll2003
manirai91
2022-11-23T00:19:30Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-22T23:16:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 model-index: - name: mbert-conll2003 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-conll2003 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.7.0 - Tokenizers 0.13.2
jeapaul/wav2vec2-base-torgo-demo-m04-nolm
jeapaul
2022-11-23T00:14:40Z
106
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-16T20:01:09Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-torgo-demo-m04-nolm results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-torgo-demo-m04-nolm This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.5735 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:---:| | 3.431 | 0.88 | 500 | 4.5567 | 1.0 | | 3.4727 | 1.75 | 1000 | 3.5626 | 1.0 | | 3.3879 | 2.63 | 1500 | 3.9274 | 1.0 | | 3.3513 | 3.5 | 2000 | 3.4813 | 1.0 | | 3.3538 | 4.38 | 2500 | 3.7300 | 1.0 | | 3.3539 | 5.25 | 3000 | 3.5714 | 1.0 | | 3.339 | 6.13 | 3500 | 3.6732 | 1.0 | | 3.3038 | 7.01 | 4000 | 3.6788 | 1.0 | | 3.35 | 7.88 | 4500 | 3.6715 | 1.0 | | 3.338 | 8.76 | 5000 | 3.5161 | 1.0 | | 3.3306 | 9.63 | 5500 | 3.7386 | 1.0 | | 3.3266 | 10.51 | 6000 | 3.4908 | 1.0 | | 3.3184 | 11.38 | 6500 | 3.7669 | 1.0 | | 3.3189 | 12.26 | 7000 | 3.6142 | 1.0 | | 3.331 | 13.13 | 7500 | 3.5619 | 1.0 | | 3.3139 | 14.01 | 8000 | 3.6632 | 1.0 | | 3.3069 | 14.89 | 8500 | 3.6127 | 1.0 | | 3.315 | 15.76 | 9000 | 3.5562 | 1.0 | | 3.3079 | 16.64 | 9500 | 3.7094 | 1.0 | | 3.3077 | 17.51 | 10000 | 3.5412 | 1.0 | | 3.3188 | 18.39 | 10500 | 3.6303 | 1.0 | | 3.3133 | 19.26 | 11000 | 3.5704 | 1.0 | | 3.3428 | 20.14 | 11500 | 3.5662 | 1.0 | | 3.3082 | 21.02 | 12000 | 3.6084 | 1.0 | | 3.3238 | 21.89 | 12500 | 3.6164 | 1.0 | | 3.3119 | 22.77 | 13000 | 3.5787 | 1.0 | | 3.2981 | 23.64 | 13500 | 3.6356 | 1.0 | | 3.3153 | 24.52 | 14000 | 3.5726 | 1.0 | | 3.3065 | 25.39 | 14500 | 3.5908 | 1.0 | | 3.3199 | 26.27 | 15000 | 3.5823 | 1.0 | | 3.306 | 27.15 | 15500 | 3.5658 | 1.0 | | 3.3153 | 28.02 | 16000 | 3.5818 | 1.0 | | 3.2762 | 28.9 | 16500 | 3.5810 | 1.0 | | 3.3196 | 29.77 | 17000 | 3.5735 | 1.0 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 2.0.0 - Tokenizers 0.13.2
manirai91/mbert-imdb
manirai91
2022-11-22T23:08:42Z
101
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T08:42:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: mbert-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-imdb This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.7.0 - Tokenizers 0.13.2
unza/xls-r-300m-nyanja-fullset
unza
2022-11-22T23:02:48Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "NyanjaSpeech", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T10:28:07Z
--- license: apache-2.0 tags: - automatic-speech-recognition - NyanjaSpeech - generated_from_trainer metrics: - wer model-index: - name: xls-r-300m-nyanja-fullset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-nyanja-fullset This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NYANJASPEECH - NYA dataset. It achieves the following results on the evaluation set: - Loss: 3.1987 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.3815 | 1.58 | 500 | 3.1987 | 1.0 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
monakth/distilbert-base-multilingual-cased-sv2
monakth
2022-11-22T22:26:39Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-22T22:24:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-multilingual-cased-sv2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-sv2 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the squad_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
sacculifer/dimbat_disaster_type_distilbert
sacculifer
2022-11-22T22:07:32Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-05T19:36:01Z
---
tags:
- generated_from_keras_callback
model-index:
- name: tmpzujlpono
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# Tweets disaster type classification model

This model was trained on part of the Disaster Tweet Corpus 2020 dataset (Analysis of Filtering Models for Disaster-Related Tweets, Wiegmann, M. et al., 2020). It achieves the following results on the evaluation set:
- Train Loss: 0.0875
- Train Accuracy: 0.8783
- Validation Loss: 0.2980
- Validation Accuracy: 0.8133
- Epoch: 5

## Model description

Labels:
- disease --- 1
- earthquake --- 2
- flood --- 3
- hurricane & tornado --- 4
- wildfire --- 5
- industrial accident --- 6
- societal crime --- 7
- transportation accident --- 8
- meteor crash --- 9
- haze --- 0

## Intended uses & limitations

This model can detect 10 different types of disaster (natural and human-made), but it struggles to detect type 0 (haze) because such tweets are rare in the training dataset and similar to type 5.

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer:
  - batch_size = 16
  - num_epochs = 5
  - batches_per_epoch = len(tokenized_tweet["train"]) // batch_size
  - total_train_steps = int(batches_per_epoch * num_epochs)
  - optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)
- training_precision: float32

### Framework versions

- Transformers 4.16.2
- TensorFlow 2.9.2
- Datasets 2.4.0
- Tokenizers 0.12.1

### How to use it

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sacculifer/dimbat_disaster_type_distilbert")
model = TFAutoModelForSequenceClassification.from_pretrained("sacculifer/dimbat_disaster_type_distilbert")
```
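A hedged continuation of the snippet above, showing one way to score a tweet with the loaded TensorFlow model (the example tweet is a placeholder):

```python
import tensorflow as tf

inputs = tokenizer(
    "Wildfire spreading fast near the highway, evacuations underway", return_tensors="tf"
)
logits = model(**inputs).logits
predicted_type = int(tf.argmax(logits, axis=-1)[0])
print(predicted_type)  # map the id back using the label table above
```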
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
research-backup
2022-11-22T20:25:41Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:40:00Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.790515873015873 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37967914438502676 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3857566765578635 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5063924402445803 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.646 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4517543859649123 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42824074074074076 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8080458038270304 - name: F1 (macro) type: f1_macro value: 0.7357565896819839 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7894366197183098 - name: F1 (macro) type: f1_macro value: 0.4680529848631216 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5520043336944745 - name: F1 (macro) type: f1_macro value: 0.5647005456999193 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9177157960631565 - name: F1 (macro) type: f1_macro value: 0.7991809595622609 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.770918207458477 - name: F1 (macro) type: f1_macro value: 0.701131895018139 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/analogy.json)): - Accuracy on SAT (full): 0.37967914438502676 - Accuracy on SAT: 0.3857566765578635 - Accuracy on BATS: 0.5063924402445803 - Accuracy on U2: 0.4517543859649123 - Accuracy on U4: 0.42824074074074076 - Accuracy on Google: 0.646 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8080458038270304 - Micro F1 score on CogALexV: 0.7894366197183098 - Micro F1 score on EVALution: 0.5520043336944745 - Micro F1 score on K&H+N: 0.9177157960631565 - Micro F1 score on ROOT09: 0.770918207458477 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.790515873015873 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 10 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2
research-backup
2022-11-22T19:57:35Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:36:40Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8335714285714285 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.38235294117647056 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3798219584569733 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5336297943301834 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.662 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4473684210526316 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4166666666666667 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8625885189091457 - name: F1 (macro) type: f1_macro value: 0.8603027072164148 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8065727699530516 - name: F1 (macro) type: f1_macro value: 0.5506373401584694 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6175514626218852 - name: F1 (macro) type: f1_macro value: 0.6052063445391235 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9263406830354037 - name: F1 (macro) type: f1_macro value: 0.8061025838390545 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8373550611093701 - name: F1 (macro) type: f1_macro value: 0.837629132435287 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/analogy.json)): - Accuracy on SAT (full): 0.38235294117647056 - Accuracy on SAT: 0.3798219584569733 - Accuracy on BATS: 0.5336297943301834 - Accuracy on U2: 0.4473684210526316 - Accuracy on U4: 0.4166666666666667 - Accuracy on Google: 0.662 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8625885189091457 - Micro F1 score on CogALexV: 0.8065727699530516 - Micro F1 score on EVALution: 0.6175514626218852 - Micro F1 score on K&H+N: 0.9263406830354037 - Micro F1 score on ROOT09: 0.8373550611093701 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8335714285714285 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 10 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
alryan1478/gpt2-wikitext2
alryan1478
2022-11-22T19:15:47Z
175
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-22T16:54:38Z
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-wikitext2 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.1085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.561 | 1.0 | 2249 | 6.4685 | | 6.1921 | 2.0 | 4498 | 6.1978 | | 6.017 | 3.0 | 6747 | 6.1085 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0+cu117 - Datasets 2.7.0 - Tokenizers 0.13.2
HarshitaDiddee/AmericasNLP_Kotiria
HarshitaDiddee
2022-11-22T18:58:36Z
4
0
transformers
[ "transformers", "wav2vec2", "automatic-speech-recognition", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T18:56:28Z
--- license: cc-by-4.0 --- ASR for Kotiria (Data Source: AmericasNLP Shared Task for Low-Resource ASR)
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2
research-backup
2022-11-22T18:16:11Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:26:58Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6346626984126984 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32887700534759357 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3264094955489614 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47581989994441354 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.464 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37719298245614036 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36342592592592593 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7761036612927528 - name: F1 (macro) type: f1_macro value: 0.7415561766602355 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7328638497652582 - name: F1 (macro) type: f1_macro value: 0.47573763054929613 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5390032502708559 - name: F1 (macro) type: f1_macro value: 0.49194003623703636 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8753564721430062 - name: F1 (macro) type: f1_macro value: 0.7536524804914483 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8282670009401442 - name: F1 (macro) type: f1_macro value: 0.8236645741563291 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/analogy.json)): - Accuracy on SAT (full): 0.32887700534759357 - Accuracy on SAT: 0.3264094955489614 - Accuracy on BATS: 0.47581989994441354 - Accuracy on U2: 0.37719298245614036 - Accuracy on U4: 0.36342592592592593 - Accuracy on Google: 0.464 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/classification.json)): - Micro F1 score on BLESS: 0.7761036612927528 - Micro F1 score on CogALexV: 0.7328638497652582 - Micro F1 score on EVALution: 0.5390032502708559 - Micro F1 score on K&H+N: 0.8753564721430062 - Micro F1 score on ROOT09: 0.8282670009401442 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.6346626984126984 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2
research-backup
2022-11-22T17:54:16Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:24:50Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7584126984126984 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32887700534759357 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3353115727002967 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39466370205669815 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.504 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39035087719298245 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.38425925925925924 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8323037516950429 - name: F1 (macro) type: f1_macro value: 0.8135716497645339 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7492957746478873 - name: F1 (macro) type: f1_macro value: 0.28766475530328117 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5861321776814734 - name: F1 (macro) type: f1_macro value: 0.545958272767557 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.903109132642415 - name: F1 (macro) type: f1_macro value: 0.7624740127692404 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8429959260419931 - name: F1 (macro) type: f1_macro value: 0.8383818257665551 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.32887700534759357
    - Accuracy on SAT: 0.3353115727002967
    - Accuracy on BATS: 0.39466370205669815
    - Accuracy on U2: 0.39035087719298245
    - Accuracy on U4: 0.38425925925925924
    - Accuracy on Google: 0.504
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8323037516950429
    - Micro F1 score on CogALexV: 0.7492957746478873
    - Micro F1 score on EVALution: 0.5861321776814734
    - Micro F1 score on K&H+N: 0.903109132642415
    - Micro F1 score on ROOT09: 0.8429959260419931
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7584126984126984

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
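The analogy and classification scores above are obtained by comparing such pair embeddings. As a rough illustration of that idea (this is not the benchmark evaluation script, and the word pairs are invented for the example), relational similarity between two pairs can be scored with cosine similarity:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-2")

# embed two word pairs as relation vectors
v1 = np.asarray(model.get_embedding(['Tokyo', 'Japan']))
v2 = np.asarray(model.get_embedding(['Paris', 'France']))

# cosine similarity: values closer to 1 suggest the pairs share a similar relation
cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(cosine)
```

A capital-of pair such as `['Paris', 'France']` should score higher against the query than an unrelated pair.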
renjithman/finetuning-sentiment-model-3000-samples
renjithman
2022-11-22T17:43:52Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T17:30:07Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: train
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.87
    - name: F1
      type: f1
      value: 0.8704318936877077
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3099
- Accuracy: 0.87
- F1: 0.8704

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
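The card above ships no usage snippet; a minimal sketch with the Transformers text-classification pipeline (the label names depend on the fine-tuned head's config and are an assumption to verify) might look like:

```python
from transformers import pipeline

# load the fine-tuned DistilBERT sentiment classifier from the Hub
classifier = pipeline("text-classification", model="renjithman/finetuning-sentiment-model-3000-samples")

print(classifier("This movie was a delight from start to finish."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- mapping LABEL_1 to "positive" is an assumption
```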
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
research-backup
2022-11-22T17:34:18Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:40:04Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8018650793650793 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3502673796791444 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35014836795252224 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5202890494719289 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.644 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39035087719298245 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.43287037037037035 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8461654361910502 - name: F1 (macro) type: f1_macro value: 0.8411664963735426 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8145539906103286 - name: F1 (macro) type: f1_macro value: 0.5873414064116238 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6505958829902492 - name: F1 (macro) type: f1_macro value: 0.6269958308732405 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9319051262433052 - name: F1 (macro) type: f1_macro value: 0.8393686548194149 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7511751801942964 - name: F1 (macro) type: f1_macro value: 0.6464435364634403 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3502673796791444
    - Accuracy on SAT: 0.35014836795252224
    - Accuracy on BATS: 0.5202890494719289
    - Accuracy on U2: 0.39035087719298245
    - Accuracy on U4: 0.43287037037037035
    - Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8461654361910502
    - Micro F1 score on CogALexV: 0.8145539906103286
    - Micro F1 score on EVALution: 0.6505958829902492
    - Micro F1 score on K&H+N: 0.9319051262433052
    - Micro F1 score on ROOT09: 0.7511751801942964
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8018650793650793

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
research-backup
2022-11-22T17:33:29Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:22:15Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7463293650793651 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34759358288770054 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3590504451038576 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.481378543635353 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.494 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3991228070175439 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35648148148148145 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8610818140726232 - name: F1 (macro) type: f1_macro value: 0.8525458448699613 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8171361502347417 - name: F1 (macro) type: f1_macro value: 0.5610856949320919 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6229685807150596 - name: F1 (macro) type: f1_macro value: 0.6126645128177534 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9215413507685887 - name: F1 (macro) type: f1_macro value: 0.8042276096823726 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.857724851143842 - name: F1 (macro) type: f1_macro value: 0.8472661094927697 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.34759358288770054
    - Accuracy on SAT: 0.3590504451038576
    - Accuracy on BATS: 0.481378543635353
    - Accuracy on U2: 0.3991228070175439
    - Accuracy on U4: 0.35648148148148145
    - Accuracy on Google: 0.494
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8610818140726232
    - Micro F1 score on CogALexV: 0.8171361502347417
    - Micro F1 score on EVALution: 0.6229685807150596
    - Micro F1 score on K&H+N: 0.9215413507685887
    - Micro F1 score on ROOT09: 0.857724851143842
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7463293650793651

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
Rahul-2022/detr-base-sroie
Rahul-2022
2022-11-22T17:31:26Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "detr", "object-detection", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
object-detection
2022-11-22T17:09:50Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: detr-base-sroie
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# detr-base-sroie

This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the None dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
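No usage example is given above; a hedged sketch with the Transformers object-detection pipeline follows (the SROIE-specific label set and the receipt image path are assumptions, since the card leaves them undocumented):

```python
from PIL import Image
from transformers import pipeline

# load the fine-tuned DETR checkpoint for object detection
detector = pipeline("object-detection", model="Rahul-2022/detr-base-sroie")

image = Image.open("receipt.jpg")  # hypothetical scanned receipt, as in the SROIE setting
for det in detector(image):
    print(det["label"], round(det["score"], 3), det["box"])
```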
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1
research-backup
2022-11-22T17:31:19Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:38:22Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7048015873015873 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37967914438502676 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3916913946587537 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5347415230683713 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.69 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3888888888888889 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.853246948922706 - name: F1 (macro) type: f1_macro value: 0.8485536876305343 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8044600938967136 - name: F1 (macro) type: f1_macro value: 0.5726819680585065 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5839653304442037 - name: F1 (macro) type: f1_macro value: 0.5524953070884607 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.934687347847256 - name: F1 (macro) type: f1_macro value: 0.8063588254058023 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8279536195549985 - name: F1 (macro) type: f1_macro value: 0.7955713493721125 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.37967914438502676
    - Accuracy on SAT: 0.3916913946587537
    - Accuracy on BATS: 0.5347415230683713
    - Accuracy on U2: 0.41228070175438597
    - Accuracy on U4: 0.3888888888888889
    - Accuracy on Google: 0.69
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.853246948922706
    - Micro F1 score on CogALexV: 0.8044600938967136
    - Micro F1 score on EVALution: 0.5839653304442037
    - Micro F1 score on K&H+N: 0.934687347847256
    - Micro F1 score on ROOT09: 0.8279536195549985
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7048015873015873

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1
research-backup
2022-11-22T17:26:35Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:36:45Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7624206349206349 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3770053475935829 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3768545994065282 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.44580322401334077 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.57 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39473684210526316 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37962962962962965 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8797649540455025 - name: F1 (macro) type: f1_macro value: 0.8747086885506318 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7992957746478874 - name: F1 (macro) type: f1_macro value: 0.5104712427778083 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6397616468039004 - name: F1 (macro) type: f1_macro value: 0.6084431389476428 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9367044585101204 - name: F1 (macro) type: f1_macro value: 0.8301423655430062 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8677530554685051 - name: F1 (macro) type: f1_macro value: 0.8691031015559968 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3770053475935829
    - Accuracy on SAT: 0.3768545994065282
    - Accuracy on BATS: 0.44580322401334077
    - Accuracy on U2: 0.39473684210526316
    - Accuracy on U4: 0.37962962962962965
    - Accuracy on Google: 0.57
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8797649540455025
    - Micro F1 score on CogALexV: 0.7992957746478874
    - Micro F1 score on EVALution: 0.6397616468039004
    - Micro F1 score on K&H+N: 0.9367044585101204
    - Micro F1 score on ROOT09: 0.8677530554685051
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7624206349206349

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
alanoix/whisper-small-br
alanoix
2022-11-22T17:26:31Z
80
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "br", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T09:51:24Z
---
language:
- br
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-br
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      args: 'config: br, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 49.98168162667155
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-small-br

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8542
- Wer: 49.9817

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1415        | 3.36  | 1000 | 0.7406          | 54.0117 |
| 0.0147        | 6.71  | 2000 | 0.7909          | 51.5479 |
| 0.0011        | 10.07 | 3000 | 0.8368          | 49.7710 |
| 0.0007        | 13.42 | 4000 | 0.8542          | 49.9817 |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
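As a usage sketch that the card itself does not provide, the fine-tuned checkpoint can be driven through the automatic-speech-recognition pipeline; the audio file name below is a placeholder:

```python
from transformers import pipeline

# Breton ASR with the fine-tuned Whisper small checkpoint
asr = pipeline("automatic-speech-recognition", model="alanoix/whisper-small-br")

result = asr("breton_sample.wav")  # placeholder path to an audio file
print(result["text"])
```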
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1
research-backup
2022-11-22T17:19:46Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:32:32Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.775079365079365 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3716577540106952 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3768545994065282 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34185658699277377 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.428 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37719298245614036 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3541666666666667 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.899201446436643 - name: F1 (macro) type: f1_macro value: 0.888889751667277 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7814553990610328 - name: F1 (macro) type: f1_macro value: 0.5516320672010655 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6408450704225352 - name: F1 (macro) type: f1_macro value: 0.6082440999373899 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9525631216526397 - name: F1 (macro) type: f1_macro value: 0.862670256588896 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.840802256345973 - name: F1 (macro) type: f1_macro value: 0.8106179148472547 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3716577540106952
    - Accuracy on SAT: 0.3768545994065282
    - Accuracy on BATS: 0.34185658699277377
    - Accuracy on U2: 0.37719298245614036
    - Accuracy on U4: 0.3541666666666667
    - Accuracy on Google: 0.428
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.899201446436643
    - Micro F1 score on CogALexV: 0.7814553990610328
    - Micro F1 score on EVALution: 0.6408450704225352
    - Micro F1 score on K&H+N: 0.9525631216526397
    - Micro F1 score on ROOT09: 0.840802256345973
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.775079365079365

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
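The lexical relation classification scores above come from training a classifier on top of these pair embeddings. A toy sketch of that setup follows (the scikit-learn classifier and the tiny hand-made pair/label lists are assumptions for illustration, not the benchmark protocol):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-1")

# toy training set: pair embeddings as features, relation types as labels
pairs = [['dog', 'animal'], ['car', 'vehicle'], ['hot', 'cold'], ['big', 'small']]
labels = ['hypernymy', 'hypernymy', 'antonymy', 'antonymy']
X = np.asarray([model.get_embedding(p) for p in pairs])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(np.asarray([model.get_embedding(['cat', 'animal'])])))
```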
datasciencemmw/old-beta1
datasciencemmw
2022-11-22T17:15:53Z
104
0
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:LiveEvil/autotrain-data-copuml-production", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T17:14:48Z
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- LiveEvil/autotrain-data-copuml-production
co2_eq_emissions:
  emissions: 0.9758714074673083
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 2205570752
- CO2 Emissions (in grams): 0.9759

## Validation Metrics

- Loss: 1.092
- Accuracy: 0.701
- Macro F1: 0.416
- Micro F1: 0.701
- Weighted F1: 0.670
- Macro Precision: 0.399
- Micro Precision: 0.701
- Weighted Precision: 0.643
- Macro Recall: 0.436
- Micro Recall: 0.701
- Weighted Recall: 0.701

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/LiveEvil/autotrain-copuml-production-2205570752
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("LiveEvil/autotrain-copuml-production-2205570752", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("LiveEvil/autotrain-copuml-production-2205570752", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
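The Python snippet above stops at the raw model outputs. Turning the logits into a predicted class is a small extension (it assumes the checkpoint's config carries the usual id2label mapping and continues from the `model` and `outputs` defined above):

```python
import torch

# convert logits to probabilities and pick the most likely class
probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id], float(probs.max()))
```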
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
research-backup
2022-11-22T17:13:57Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:30:48Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7387698412698412 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3342245989304813 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34718100890207715 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5441912173429683 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.644 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35526315789473684 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37962962962962965 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8145246346240772 - name: F1 (macro) type: f1_macro value: 0.801802054210856 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7774647887323943 - name: F1 (macro) type: f1_macro value: 0.5026184700694826 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5980498374864572 - name: F1 (macro) type: f1_macro value: 0.5765100456864519 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8878069138206858 - name: F1 (macro) type: f1_macro value: 0.7711282513838499 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.827326856784707 - name: F1 (macro) type: f1_macro value: 0.824410778730745 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3342245989304813
    - Accuracy on SAT: 0.34718100890207715
    - Accuracy on BATS: 0.5441912173429683
    - Accuracy on U2: 0.35526315789473684
    - Accuracy on U4: 0.37962962962962965
    - Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8145246346240772
    - Micro F1 score on CogALexV: 0.7774647887323943
    - Micro F1 score on EVALution: 0.5980498374864572
    - Micro F1 score on K&H+N: 0.8878069138206858
    - Micro F1 score on ROOT09: 0.827326856784707
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7387698412698412

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1
research-backup
2022-11-22T17:03:22Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:24:49Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7523809523809524 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35294117647058826 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35014836795252224 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4191217342968316 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.554 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4050925925925926 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8410426397468735 - name: F1 (macro) type: f1_macro value: 0.8153049654017815 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7981220657276996 - name: F1 (macro) type: f1_macro value: 0.5156838585733334 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.605092091007584 - name: F1 (macro) type: f1_macro value: 0.5707468312851958 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9076997982889338 - name: F1 (macro) type: f1_macro value: 0.7719219859032024 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.855531181447822 - name: F1 (macro) type: f1_macro value: 0.8548547221202175 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.35294117647058826
    - Accuracy on SAT: 0.35014836795252224
    - Accuracy on BATS: 0.4191217342968316
    - Accuracy on U2: 0.41228070175438597
    - Accuracy on U4: 0.4050925925925926
    - Accuracy on Google: 0.554
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8410426397468735
    - Micro F1 score on CogALexV: 0.7981220657276996
    - Micro F1 score on EVALution: 0.605092091007584
    - Micro F1 score on K&H+N: 0.9076997982889338
    - Micro F1 score on ROOT09: 0.855531181447822
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7523809523809524

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip,
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
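As an illustration of how the analogy-question accuracies above can be produced with such embeddings (the question and candidate pairs below are invented for the example; the real benchmarks come from the linked datasets), one can pick the candidate pair whose relation embedding is closest to the query pair:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1")

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# invented analogy question: which candidate pair relates like ['word', 'language']?
query = model.get_embedding(['word', 'language'])
candidates = [['note', 'music'], ['paint', 'brush'], ['poem', 'rhythm']]

scores = [cosine(query, model.get_embedding(c)) for c in candidates]
print(candidates[int(np.argmax(scores))])
```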
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
research-backup
2022-11-22T17:00:21Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:22:15Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8430952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3582887700534759 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3649851632047478 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4280155642023346 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.532 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3101851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8460147657073979 - name: F1 (macro) type: f1_macro value: 0.8315897128108677 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8084507042253521 - name: F1 (macro) type: f1_macro value: 0.5269777075808457 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6424702058504875 - name: F1 (macro) type: f1_macro value: 0.6178608994596904 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.913612019197329 - name: F1 (macro) type: f1_macro value: 0.7738790468743169 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8693199623942337 - name: F1 (macro) type: f1_macro value: 0.864532922094076 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3582887700534759 - Accuracy on SAT: 0.3649851632047478 - Accuracy on BATS: 0.4280155642023346 - Accuracy on U2: 0.3333333333333333 - Accuracy on U4: 0.3101851851851852 - Accuracy on Google: 0.532 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8460147657073979 - Micro F1 score on CogALexV: 0.8084507042253521 - Micro F1 score on EVALution: 0.6424702058504875 - Micro F1 score on K&H+N: 0.913612019197329 - Micro F1 score on ROOT09: 0.8693199623942337 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8430952380952381 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
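When embedding many word pairs, it is more convenient to pass them in one call. The sketch below assumes, following the relbert README, that `get_embedding` also accepts a list of word pairs and returns one vector per pair; if that does not hold for the installed version, loop over the pairs instead.

```python
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1")

# Assumed behaviour: a list of word pairs yields one embedding per pair.
pairs = [['Tokyo', 'Japan'], ['Paris', 'France'], ['dog', 'animal']]
vectors = model.get_embedding(pairs)
print(len(vectors), len(vectors[0]))  # number of pairs, embedding dimension
```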
semindan/xnli_m_bert_only_en
semindan
2022-11-22T16:24:52Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:xnli", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-15T20:48:41Z
--- license: apache-2.0 tags: - text-classification - generated_from_trainer datasets: - xnli metrics: - accuracy model-index: - name: xnli_m_bert_only_en_single_gpu results: - task: name: Text Classification type: text-classification dataset: name: xnli type: xnli config: en split: train args: en metrics: - name: Accuracy type: accuracy value: 0.8076305220883534 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xnli_m_bert_only_en_single_gpu This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the xnli dataset. It achieves the following results on the evaluation set: - Loss: 1.0082 - Accuracy: 0.8076 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3328 | 1.0 | 3068 | 0.5433 | 0.8036 | | 0.259 | 2.0 | 6136 | 0.5708 | 0.8008 | | 0.2023 | 3.0 | 9204 | 0.6475 | 0.8048 | | 0.1362 | 4.0 | 12272 | 0.7661 | 0.7972 | | 0.0945 | 5.0 | 15340 | 0.8333 | 0.8008 | | 0.0665 | 6.0 | 18408 | 0.9312 | 0.8092 | | 0.0463 | 7.0 | 21476 | 1.0082 | 0.8076 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.13.0 - Datasets 2.6.1 - Tokenizers 0.13.1
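As a usage sketch for this checkpoint: XNLI is a premise/hypothesis task, so the text-classification pipeline should be fed sentence pairs. The example sentences below are illustrative, and unless `id2label` was set during training the outputs may show generic `LABEL_0`/`LABEL_1`/`LABEL_2` names (XNLI's convention is 0 = entailment, 1 = neutral, 2 = contradiction).

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="semindan/xnli_m_bert_only_en")

# Premise / hypothesis pair (illustrative sentences).
result = classifier({
    "text": "A man is playing a guitar on stage.",
    "text_pair": "Someone is performing music.",
})
print(result)
```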
multimodalart/sd-sc
multimodalart
2022-11-22T16:19:18Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-22T16:05:03Z
--- license: creativeml-openrail-m --- Just the safety checker component of Stable Diffusion, published on its own. For the full model, refer to https://huggingface.co/runwayml/stable-diffusion-v1-5
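A minimal sketch of how the checker could be plugged into a diffusers pipeline, assuming the safety-checker weights sit at the root of this repository and that a Stable Diffusion checkpoint such as the one linked above is available.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

# Assumption: the checker weights are stored at the repository root.
safety_checker = StableDiffusionSafetyChecker.from_pretrained("multimodalart/sd-sc")

# Swap it in for the pipeline's default checker.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    safety_checker=safety_checker,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```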
SweepCake/LunarLander-v2-PPO-HFcourse
SweepCake
2022-11-22T15:44:29Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-22T15:44:07Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 239.22 +/- 13.04 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
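Until the TODO above is filled in, the following sketch shows the usual huggingface_sb3 loading pattern. The checkpoint filename is an assumption (the Deep RL course convention is `ppo-LunarLander-v2.zip`); adjust it to whatever file is actually stored in the repository.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; check the repo's file list if loading fails.
checkpoint = load_from_hub(
    repo_id="SweepCake/LunarLander-v2-PPO-HFcourse",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```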
huggingtweets/oryxspioenkop
huggingtweets
2022-11-22T15:10:21Z
111
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-22T15:09:05Z
--- language: en thumbnail: http://www.huggingtweets.com/oryxspioenkop/1669129816805/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/929707102083395584/tCWiYbO1_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Oryx</div> <div style="text-align: center; font-size: 14px;">@oryxspioenkop</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Oryx. | Data | Oryx | | --- | --- | | Tweets downloaded | 3246 | | Retweets | 2219 | | Short tweets | 266 | | Tweets kept | 761 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qbqfz863/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oryxspioenkop's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/oryxspioenkop') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jjjunyeong/bart-finetuned-squad
jjjunyeong
2022-11-22T14:42:07Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:squad", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-22T12:27:04Z
--- tags: - generated_from_trainer datasets: - squad metrics: - rouge model-index: - name: bart-finetuned-squad results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad type: squad config: plain_text split: train args: plain_text metrics: - name: Rouge1 type: rouge value: 50.1505 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-finetuned-squad This model is a fine-tuned version of [p208p2002/bart-squad-qg-hl](https://huggingface.co/p208p2002/bart-squad-qg-hl) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.8813 - Rouge1: 50.1505 - Rouge2: 26.8606 - Rougel: 46.0203 - Rougelsum: 46.0242 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 1.5702 | 1.0 | 125 | 1.4266 | 49.7474 | 26.6965 | 46.3227 | 46.342 | | 0.84 | 2.0 | 250 | 1.4845 | 49.8379 | 26.3973 | 45.126 | 45.1791 | | 0.535 | 3.0 | 375 | 1.6037 | 50.1413 | 27.4581 | 46.7795 | 46.8001 | | 0.3621 | 4.0 | 500 | 1.6899 | 49.6087 | 25.9818 | 45.0914 | 45.1004 | | 0.2448 | 5.0 | 625 | 1.7540 | 49.7468 | 26.5312 | 45.5623 | 45.5296 | | 0.1756 | 6.0 | 750 | 1.8287 | 49.4987 | 26.2315 | 45.3515 | 45.4214 | | 0.13 | 7.0 | 875 | 1.8809 | 49.6426 | 26.4688 | 45.5167 | 45.5427 | | 0.1016 | 8.0 | 1000 | 1.8813 | 50.1505 | 26.8606 | 46.0203 | 46.0242 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
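A usage sketch for question generation: the base checkpoint (p208p2002/bart-squad-qg-hl) expects the answer span to be wrapped in `[HL]` highlight tokens, and this fine-tuned model is assumed to follow the same input format; the context sentence is illustrative.

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="jjjunyeong/bart-finetuned-squad")

# The answer span is wrapped in [HL] ... [HL] (assumed highlight convention).
context = (
    "Hugging Face is headquartered in [HL] New York City [HL] and maintains "
    "the Transformers library."
)
print(qg(context, max_length=64, num_beams=4))
```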
bitsanlp/deberta-v3-base_base
bitsanlp
2022-11-22T14:37:33Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T13:49:27Z
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-v3-base_base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-base_base This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 28 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.0 - Tokenizers 0.13.2
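Since the card does not document the target task or label set, the sketch below simply treats the checkpoint as a generic sequence classifier; label names may come out as `LABEL_0`, `LABEL_1`, and so on unless `id2label` was configured during training.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bitsanlp/deberta-v3-base_base")
print(classifier("This is a sample sentence to classify."))  # labels are undocumented
```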
alexziweiwang/exp15-F01-both
alexziweiwang
2022-11-22T14:22:24Z
105
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T08:36:55Z
--- tags: - generated_from_trainer model-index: - name: exp15-F01-both results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # exp15-F01-both This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.6149 - Wer: 1.0154 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 45.2628 | 0.33 | 500 | 2.9838 | 1.0 | | 3.1304 | 0.67 | 1000 | 2.8311 | 1.0 | | 2.9607 | 1.0 | 1500 | 2.6426 | 1.0039 | | 2.7429 | 1.33 | 2000 | 2.5365 | 1.2046 | | 2.5496 | 1.66 | 2500 | 2.2169 | 1.3050 | | 2.3134 | 2.0 | 3000 | 2.0450 | 1.3127 | | 2.1189 | 2.33 | 3500 | 1.8677 | 1.2780 | | 2.0075 | 2.66 | 4000 | 1.7450 | 1.2703 | | 1.9014 | 3.0 | 4500 | 1.8381 | 1.2664 | | 1.7246 | 3.33 | 5000 | 1.7980 | 1.2510 | | 1.6783 | 3.66 | 5500 | 1.7269 | 1.2510 | | 1.589 | 3.99 | 6000 | 1.5640 | 1.2664 | | 1.4085 | 4.33 | 6500 | 1.7296 | 1.2355 | | 1.4126 | 4.66 | 7000 | 1.5208 | 1.2317 | | 1.3506 | 4.99 | 7500 | 1.6253 | 1.2317 | | 1.2276 | 5.33 | 8000 | 1.6222 | 1.2239 | | 1.1842 | 5.66 | 8500 | 1.4836 | 1.1969 | | 1.1445 | 5.99 | 9000 | 1.5313 | 1.2046 | | 1.0254 | 6.32 | 9500 | 1.9130 | 1.2046 | | 1.0214 | 6.66 | 10000 | 1.8944 | 1.2085 | | 0.9677 | 6.99 | 10500 | 1.9039 | 1.1853 | | 0.8822 | 7.32 | 11000 | 1.7036 | 1.1892 | | 0.8824 | 7.66 | 11500 | 1.6062 | 1.1815 | | 0.8695 | 7.99 | 12000 | 1.7019 | 1.1853 | | 0.7536 | 8.32 | 12500 | 1.9117 | 1.1737 | | 0.775 | 8.66 | 13000 | 1.8778 | 1.1815 | | 0.7409 | 8.99 | 13500 | 1.7534 | 1.1776 | | 0.7035 | 9.32 | 14000 | 1.9860 | 1.1853 | | 0.6905 | 9.65 | 14500 | 1.9141 | 1.1892 | | 0.6536 | 9.99 | 15000 | 1.7848 | 1.1737 | | 0.6237 | 10.32 | 15500 | 2.0624 | 1.1544 | | 0.5986 | 10.65 | 16000 | 1.9958 | 1.1544 | | 0.5838 | 10.99 | 16500 | 1.8005 | 1.1622 | | 0.5231 | 11.32 | 17000 | 1.5967 | 1.1351 | | 0.5452 | 11.65 | 17500 | 1.8145 | 1.1274 | | 0.5446 | 11.98 | 18000 | 2.0214 | 1.1429 | | 0.4727 | 12.32 | 18500 | 1.8989 | 1.1313 | | 0.4908 | 12.65 | 19000 | 1.7152 | 1.1467 | | 0.483 | 12.98 | 19500 | 1.7354 | 1.1429 | | 0.4455 | 13.32 | 20000 | 1.9493 | 1.1506 | | 0.4456 | 13.65 | 20500 | 2.0869 | 1.1197 | | 0.4306 | 13.98 | 21000 | 1.9248 | 1.1236 | | 0.3827 | 14.31 | 21500 | 1.9245 | 1.1274 | | 0.4059 | 14.65 | 22000 | 1.9478 | 1.1313 | | 0.3941 | 14.98 | 22500 | 2.2373 | 1.1197 | | 0.4094 | 15.31 | 23000 | 2.0268 | 1.1158 | | 0.3584 | 15.65 | 23500 | 1.9292 | 1.1313 | | 0.3615 | 15.98 | 24000 | 2.1744 | 1.0965 | | 0.3564 | 16.31 | 24500 | 2.4167 | 1.0927 | | 0.3202 | 16.64 | 25000 | 2.6332 | 1.1081 | | 0.3099 | 16.98 | 25500 | 2.9448 | 1.1004 | | 0.3126 | 17.31 | 26000 | 2.4662 | 1.0927 | | 0.3189 | 17.64 | 26500 | 2.3619 | 1.0772 | | 0.3929 | 17.98 | 27000 | 2.3571 | 1.0618 | | 0.27 | 18.31 | 27500 | 2.2457 | 1.0734 | | 0.2664 
| 18.64 | 28000 | 2.5133 | 1.0772 | | 0.2875 | 18.97 | 28500 | 2.2798 | 1.0618 | | 0.2336 | 19.31 | 29000 | 2.3515 | 1.0347 | | 0.2597 | 19.64 | 29500 | 2.3072 | 1.0463 | | 0.2573 | 19.97 | 30000 | 2.1702 | 1.0425 | | 0.2431 | 20.31 | 30500 | 2.2727 | 1.0618 | | 0.2362 | 20.64 | 31000 | 2.3082 | 1.0772 | | 0.2377 | 20.97 | 31500 | 2.5453 | 1.0734 | | 0.228 | 21.3 | 32000 | 2.6838 | 1.0618 | | 0.2082 | 21.64 | 32500 | 2.7629 | 1.0695 | | 0.2041 | 21.97 | 33000 | 2.4433 | 1.0347 | | 0.2208 | 22.3 | 33500 | 2.2516 | 1.0463 | | 0.2505 | 22.64 | 34000 | 2.4056 | 1.0541 | | 0.187 | 22.97 | 34500 | 2.6017 | 1.0347 | | 0.1987 | 23.3 | 35000 | 2.5061 | 1.0425 | | 0.1952 | 23.64 | 35500 | 2.4440 | 1.0463 | | 0.1777 | 23.97 | 36000 | 2.4333 | 1.0463 | | 0.1981 | 24.3 | 36500 | 2.4327 | 1.0309 | | 0.1729 | 24.63 | 37000 | 2.4114 | 1.0309 | | 0.1895 | 24.97 | 37500 | 2.3885 | 1.0347 | | 0.1766 | 25.3 | 38000 | 2.2978 | 1.0154 | | 0.1603 | 25.63 | 38500 | 2.3070 | 1.0039 | | 0.1764 | 25.97 | 39000 | 2.4975 | 1.0154 | | 0.1502 | 26.3 | 39500 | 2.3422 | 0.9923 | | 0.1574 | 26.63 | 40000 | 2.5013 | 1.0077 | | 0.1794 | 26.96 | 40500 | 2.4088 | 1.0039 | | 0.1481 | 27.3 | 41000 | 2.3456 | 1.0077 | | 0.1594 | 27.63 | 41500 | 2.4916 | 1.0154 | | 0.1384 | 27.96 | 42000 | 2.4173 | 1.0077 | | 0.1649 | 28.3 | 42500 | 2.5922 | 1.0116 | | 0.145 | 28.63 | 43000 | 2.5461 | 1.0039 | | 0.1654 | 28.96 | 43500 | 2.5312 | 1.0039 | | 0.1389 | 29.29 | 44000 | 2.5974 | 1.0077 | | 0.1592 | 29.63 | 44500 | 2.6050 | 1.0193 | | 0.1055 | 29.96 | 45000 | 2.6149 | 1.0154 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.13.2
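A usage sketch, assuming the repository bundles a processor/tokenizer next to the fine-tuned wav2vec2 weights so the generic ASR pipeline can load it directly; the audio path is illustrative and should point to 16 kHz mono speech.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="alexziweiwang/exp15-F01-both")
print(asr("sample_16khz.wav"))  # illustrative path to a 16 kHz mono recording
```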
gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3
gary109
2022-11-22T14:06:09Z
76
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "gary109/AI_Light_Dance", "generated_from_trainer", "dataset:ai_light_dance", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T08:33:10Z
--- tags: - automatic-speech-recognition - gary109/AI_Light_Dance - generated_from_trainer datasets: - ai_light_dance metrics: - wer model-index: - name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3 This model is a fine-tuned version of [gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3](https://huggingface.co/gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3) on the GARY109/AI_LIGHT_DANCE - ONSET-IDMT-SMT-DRUMS-V2+MDBDRUMS dataset. It achieves the following results on the evaluation set: - Loss: 0.5550 - Wer: 0.3147 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1747 | 1.0 | 45 | 0.5638 | 0.3337 | | 0.2339 | 2.0 | 90 | 0.5785 | 0.3254 | | 0.2849 | 3.0 | 135 | 0.5586 | 0.3397 | | 0.2396 | 4.0 | 180 | 0.5868 | 0.3266 | | 0.2272 | 5.0 | 225 | 0.6052 | 0.3230 | | 0.2497 | 6.0 | 270 | 0.5913 | 0.3278 | | 0.2218 | 7.0 | 315 | 0.5926 | 0.3349 | | 0.2584 | 8.0 | 360 | 0.5617 | 0.3218 | | 0.2741 | 9.0 | 405 | 0.5901 | 0.3230 | | 0.2481 | 10.0 | 450 | 0.5860 | 0.3278 | | 0.2504 | 11.0 | 495 | 0.5991 | 0.3123 | | 0.2125 | 12.0 | 540 | 0.5992 | 0.3218 | | 0.2482 | 13.0 | 585 | 0.5756 | 0.3194 | | 0.2135 | 14.0 | 630 | 0.5836 | 0.3302 | | 0.2345 | 15.0 | 675 | 0.6347 | 0.3254 | | 0.1912 | 16.0 | 720 | 0.6160 | 0.3206 | | 0.2117 | 17.0 | 765 | 0.6268 | 0.3099 | | 0.2217 | 18.0 | 810 | 0.6873 | 0.3182 | | 0.2165 | 19.0 | 855 | 0.6721 | 0.3159 | | 0.207 | 20.0 | 900 | 0.6312 | 0.3206 | | 0.2263 | 21.0 | 945 | 0.6223 | 0.3290 | | 0.2015 | 22.0 | 990 | 0.6319 | 0.3182 | | 0.1997 | 23.0 | 1035 | 0.6527 | 0.3135 | | 0.2318 | 24.0 | 1080 | 0.5987 | 0.3278 | | 0.2196 | 25.0 | 1125 | 0.6269 | 0.3242 | | 0.2298 | 26.0 | 1170 | 0.5774 | 0.3254 | | 0.2117 | 27.0 | 1215 | 0.5938 | 0.3027 | | 0.2553 | 28.0 | 1260 | 0.5831 | 0.3123 | | 0.226 | 29.0 | 1305 | 0.6151 | 0.3099 | | 0.1635 | 30.0 | 1350 | 0.5622 | 0.3230 | | 0.5734 | 31.0 | 1395 | 0.6198 | 0.2920 | | 0.2196 | 32.0 | 1440 | 0.5779 | 0.3039 | | 0.2019 | 33.0 | 1485 | 0.5866 | 0.3111 | | 0.2222 | 34.0 | 1530 | 0.5557 | 0.3063 | | 0.2167 | 35.0 | 1575 | 0.5740 | 0.3206 | | 0.2011 | 36.0 | 1620 | 0.5598 | 0.3004 | | 0.2032 | 37.0 | 1665 | 0.5550 | 0.3147 | | 0.225 | 38.0 | 1710 | 0.5794 | 0.3099 | | 0.2068 | 39.0 | 1755 | 0.6223 | 0.3063 | | 0.2105 | 40.0 | 1800 | 0.5797 | 0.3039 | | 0.1968 | 41.0 | 1845 | 0.5681 | 0.2968 | | 0.224 | 42.0 | 1890 | 0.5742 | 0.3170 | | 0.2351 | 43.0 | 1935 | 0.5567 | 0.3111 | | 0.2121 | 44.0 | 1980 | 0.5893 | 0.3039 | | 0.1913 | 45.0 | 2025 | 0.6030 | 0.3027 | | 0.1636 | 46.0 | 2070 | 0.5812 | 0.3004 | | 0.2062 | 47.0 | 2115 | 0.6081 | 0.3004 | | 0.2031 | 48.0 | 2160 | 0.5610 | 
0.3159 | | 0.1892 | 49.0 | 2205 | 0.5863 | 0.3147 | | 0.1712 | 50.0 | 2250 | 0.5943 | 0.3159 | | 0.1886 | 51.0 | 2295 | 0.5953 | 0.3051 | | 0.1748 | 52.0 | 2340 | 0.5761 | 0.3087 | | 0.1705 | 53.0 | 2385 | 0.6045 | 0.2872 | | 0.1794 | 54.0 | 2430 | 0.5731 | 0.3075 | | 0.1815 | 55.0 | 2475 | 0.5949 | 0.2849 | | 0.1571 | 56.0 | 2520 | 0.5663 | 0.2884 | | 0.1902 | 57.0 | 2565 | 0.5903 | 0.2956 | | 0.2057 | 58.0 | 2610 | 0.5820 | 0.2872 | | 0.1904 | 59.0 | 2655 | 0.5923 | 0.2896 | | 0.1677 | 60.0 | 2700 | 0.5769 | 0.3075 | | 0.1859 | 61.0 | 2745 | 0.5566 | 0.3147 | | 0.2382 | 62.0 | 2790 | 0.5849 | 0.3051 | | 0.1753 | 63.0 | 2835 | 0.5773 | 0.3075 | | 0.1651 | 64.0 | 2880 | 0.5877 | 0.3039 | | 0.1781 | 65.0 | 2925 | 0.5905 | 0.3027 | | 0.1582 | 66.0 | 2970 | 0.5800 | 0.3015 | | 0.1538 | 67.0 | 3015 | 0.6025 | 0.3075 | | 0.1606 | 68.0 | 3060 | 0.5758 | 0.3039 | | 0.1522 | 69.0 | 3105 | 0.5860 | 0.2932 | | 0.1521 | 70.0 | 3150 | 0.5896 | 0.2956 | | 0.1592 | 71.0 | 3195 | 0.5738 | 0.3027 | | 0.2245 | 72.0 | 3240 | 0.5782 | 0.3039 | | 0.2185 | 73.0 | 3285 | 0.5722 | 0.3027 | | 0.1597 | 74.0 | 3330 | 0.5891 | 0.3004 | | 0.1713 | 75.0 | 3375 | 0.5650 | 0.3027 | | 0.1464 | 76.0 | 3420 | 0.5860 | 0.3063 | | 0.1551 | 77.0 | 3465 | 0.5755 | 0.3027 | | 0.1509 | 78.0 | 3510 | 0.5895 | 0.2944 | | 0.176 | 79.0 | 3555 | 0.5750 | 0.2992 | | 0.1695 | 80.0 | 3600 | 0.5759 | 0.3004 | | 0.1797 | 81.0 | 3645 | 0.5904 | 0.2992 | | 0.1371 | 82.0 | 3690 | 0.5923 | 0.3015 | | 0.1798 | 83.0 | 3735 | 0.5864 | 0.2992 | | 0.1386 | 84.0 | 3780 | 0.5733 | 0.3004 | | 0.2173 | 85.0 | 3825 | 0.5751 | 0.3004 | | 0.151 | 86.0 | 3870 | 0.5711 | 0.2968 | | 0.1579 | 87.0 | 3915 | 0.5750 | 0.2992 | | 0.1328 | 88.0 | 3960 | 0.5764 | 0.2944 | | 0.1657 | 89.0 | 4005 | 0.5769 | 0.3004 | | 0.1353 | 90.0 | 4050 | 0.5715 | 0.2956 | | 0.1982 | 91.0 | 4095 | 0.5754 | 0.2968 | | 0.1687 | 92.0 | 4140 | 0.5725 | 0.2980 | | 0.1842 | 93.0 | 4185 | 0.5750 | 0.2980 | | 0.1893 | 94.0 | 4230 | 0.5789 | 0.2944 | | 0.1744 | 95.0 | 4275 | 0.5750 | 0.3004 | | 0.1745 | 96.0 | 4320 | 0.5794 | 0.2980 | | 0.1665 | 97.0 | 4365 | 0.5755 | 0.3004 | | 0.1569 | 98.0 | 4410 | 0.5763 | 0.2968 | | 0.1449 | 99.0 | 4455 | 0.5779 | 0.2968 | | 0.1469 | 100.0 | 4500 | 0.5774 | 0.2968 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.8.1+cu111 - Datasets 2.7.1.dev0 - Tokenizers 0.13.2
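A lower-level usage sketch with the CTC head, assuming the repository ships a matching processor whose vocabulary encodes drum-onset symbols rather than ordinary words; the audio path is illustrative.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v3"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load audio at the 16 kHz rate wav2vec2 expects (path is illustrative).
speech, _ = librosa.load("drum_loop.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```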
sd-concepts-library/ugly_sonic_enhanced
sd-concepts-library
2022-11-22T13:46:22Z
0
2
null
[ "license:openrail", "region:us" ]
null
2022-11-22T13:25:22Z
--- license: openrail --- Yes, he is back, better than ever. And with a beautiful Green Hill Zone. Renders in Automatic1111 ![04428-3036068214-uglyzonix.png](https://s3.amazonaws.com/moonup/production/uploads/1669124772659-630406f20907b9a115c620e6.png) ![04427-970404119-uglyzonix.png](https://s3.amazonaws.com/moonup/production/uploads/1669124772661-630406f20907b9a115c620e6.png) ![04426-3850462960-uglyzonix.png](https://s3.amazonaws.com/moonup/production/uploads/1669124772658-630406f20907b9a115c620e6.png)
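For use outside Automatic1111, a diffusers sketch is given below. It assumes a recent diffusers release with `load_textual_inversion`, that the repo stores a standard `learned_embeds.bin`, and that the trigger token is `uglyzonix`, as the render filenames above suggest.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the concept embedding; the trigger token "uglyzonix" is an assumption
# based on the prompts visible in the render filenames.
pipe.load_textual_inversion("sd-concepts-library/ugly_sonic_enhanced")
image = pipe("uglyzonix running through Green Hill Zone, detailed, vibrant").images[0]
image.save("ugly_sonic_enhanced.png")
```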
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1
research-backup
2022-11-22T13:00:28Z
97
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:34:42Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7449603174603174 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3502673796791444 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3560830860534125 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3468593663146192 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.432 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37719298245614036 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.38425925925925924 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8523429260207925 - name: F1 (macro) type: f1_macro value: 0.8411456349485952 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8157276995305164 - name: F1 (macro) type: f1_macro value: 0.5982289168562968 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6386782231852655 - name: F1 (macro) type: f1_macro value: 0.6034154846314037 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.95875356472143 - name: F1 (macro) type: f1_macro value: 0.8723815565345302 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.846443121278596 - name: F1 (macro) type: f1_macro value: 0.8238870756074439 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/analogy.json)): - Accuracy on SAT (full): 0.3502673796791444 - Accuracy on SAT: 0.3560830860534125 - Accuracy on BATS: 0.3468593663146192 - Accuracy on U2: 0.37719298245614036 - Accuracy on U4: 0.38425925925925924 - Accuracy on Google: 0.432 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8523429260207925 - Micro F1 score on CogALexV: 0.8157276995305164 - Micro F1 score on EVALution: 0.6386782231852655 - Micro F1 score on K&H+N: 0.95875356472143 - Micro F1 score on ROOT09: 0.846443121278596 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.7449603174603174 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
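One way to use these embeddings for SAT-style analogy questions is to pick the candidate pair whose relation embedding is closest to the query pair, as sketched below with illustrative word pairs and a plain cosine-similarity helper.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1")

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Choose the candidate whose relation is most similar to the query relation.
query = model.get_embedding(['hot', 'cold'])
candidates = [['tall', 'short'], ['wet', 'water'], ['dog', 'animal']]
scores = [cosine(query, model.get_embedding(c)) for c in candidates]
print(candidates[int(np.argmax(scores))])  # expected: ['tall', 'short']
```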
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2
research-backup
2022-11-22T11:13:05Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:34:15Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.682936507936508 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4117647058823529 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4065281899109792 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.44580322401334077 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42543859649122806 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4351851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.889709205966551 - name: F1 (macro) type: f1_macro value: 0.8856371272538675 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7983568075117371 - name: F1 (macro) type: f1_macro value: 0.5722493642763411 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6034669555796316 - name: F1 (macro) type: f1_macro value: 0.5834867979418635 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9533977881338248 - name: F1 (macro) type: f1_macro value: 0.848937537646962 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8718270134753996 - name: F1 (macro) type: f1_macro value: 0.8714610694444686 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/analogy.json)): - Accuracy on SAT (full): 0.4117647058823529 - Accuracy on SAT: 0.4065281899109792 - Accuracy on BATS: 0.44580322401334077 - Accuracy on U2: 0.42543859649122806 - Accuracy on U4: 0.4351851851851852 - Accuracy on Google: 0.618 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/classification.json)): - Micro F1 score on BLESS: 0.889709205966551 - Micro F1 score on CogALexV: 0.7983568075117371 - Micro F1 score on EVALution: 0.6034669555796316 - Micro F1 score on K&H+N: 0.9533977881338248 - Micro F1 score on ROOT09: 0.8718270134753996 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.682936507936508 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 6 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
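For lexical relation classification, the embeddings can feed a simple downstream classifier. The sketch below is a toy illustration with made-up pairs and labels, not the benchmark protocol used for the scores above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2")

# Toy training data (pairs and labels are illustrative).
train_pairs = [['dog', 'animal'], ['car', 'vehicle'], ['dog', 'cat'], ['car', 'bicycle']]
train_labels = ['hypernym', 'hypernym', 'co-hyponym', 'co-hyponym']

X = np.array([model.get_embedding(p) for p in train_pairs])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

test = np.array([model.get_embedding(['apple', 'fruit'])])
print(clf.predict(test))  # expected: ['hypernym']
```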
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1
research-backup
2022-11-22T11:10:41Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:34:44Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8926984126984127 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4572192513368984 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4599406528189911 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5369649805447471 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.748 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4298245614035088 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4375 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8945306614434232 - name: F1 (macro) type: f1_macro value: 0.8889050346897381 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7887323943661971 - name: F1 (macro) type: f1_macro value: 0.5429622796506292 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6132177681473456 - name: F1 (macro) type: f1_macro value: 0.5967298388536921 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9580580093204424 - name: F1 (macro) type: f1_macro value: 0.8772669717354012 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8733939204011282 - name: F1 (macro) type: f1_macro value: 0.865464870691388 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/analogy.json)): - Accuracy on SAT (full): 0.4572192513368984 - Accuracy on SAT: 0.4599406528189911 - Accuracy on BATS: 0.5369649805447471 - Accuracy on U2: 0.4298245614035088 - Accuracy on U4: 0.4375 - Accuracy on Google: 0.748 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8945306614434232 - Micro F1 score on CogALexV: 0.7887323943661971 - Micro F1 score on EVALution: 0.6132177681473456 - Micro F1 score on K&H+N: 0.9580580093204424 - Micro F1 score on ROOT09: 0.8733939204011282 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8926984126984127 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: nce_logout - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 9 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 1 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
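To get a quick feel for which relations the model groups together, one can build a pairwise cosine-similarity matrix over a handful of word pairs, as in the sketch below (pairs are illustrative).

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1")

pairs = [['Tokyo', 'Japan'], ['Paris', 'France'], ['dog', 'animal'], ['car', 'vehicle']]
X = np.array([model.get_embedding(p) for p in pairs])
X = X / np.linalg.norm(X, axis=1, keepdims=True)  # L2-normalise each embedding
print(np.round(X @ X.T, 3))  # similarity matrix; same-relation pairs should cluster
```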