Dataset columns:

| column | dtype | range |
| :--- | :--- | :--- |
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-06 06:27:01 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 542 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-06 06:26:44 |
| card | string | length 11 to 1.01M |
fxmarty/20220713-h14m38s16_example_conll2003
fxmarty
2022-07-13T14:38:21Z
0
0
null
[ "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "region:us" ]
token-classification
2022-07-13T14:38:16Z
--- pipeline_tag: token-classification datasets: - conll2003 metrics: - precision - recall - f1 - accuracy tags: - distilbert --- **task**: `token-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english` * **dataset**: * **path**: `conll2003` * **eval_split**: `validation` * **data_keys**: `{'primary': 'tokens'}` * **ref_keys**: `['ner_tags']` * **calibration_split**: `train` * **quantization_approach**: `static` * **operators_to_quantize**: `['Add', 'MatMul']` * **per_channel**: `False` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` # Evaluation ## Non-time metrics | node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) | | :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: | | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 | | `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 4, input length = 64. | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 114.51 | 53.59 | \| | 8.73 | 18.67 | | `[]` | \| | 90.67 | 59.55 | \| | 11.07 | 16.87 |
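The card above records a static-quantization benchmark run with ONNX Runtime (minmax calibration, opset 11, `Add`/`MatMul` operators, optional node exclusion) driven through Optimum on SageMaker. As a rough illustration of that kind of setup, here is a minimal sketch using `onnxruntime.quantization` directly; it is not the exact pipeline behind the benchmark, the file paths are placeholders, and the random calibration batches stand in for the 100 conll2003 samples named in the card.

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    CalibrationMethod,
    QuantType,
    quantize_static,
)


class ToyCalibrationReader(CalibrationDataReader):
    """Yields calibration batches; a real run would feed ~100 tokenized conll2003 samples."""

    def __init__(self, num_samples=100, seq_len=64):
        self._batches = iter(
            {
                "input_ids": np.random.randint(0, 30522, (1, seq_len), dtype=np.int64),
                "attention_mask": np.ones((1, seq_len), dtype=np.int64),
            }
            for _ in range(num_samples)
        )

    def get_next(self):
        return next(self._batches, None)


# Static quantization of the exported ONNX model (placeholder paths),
# restricted to the Add/MatMul operators benchmarked in the card.
quantize_static(
    "model.onnx",
    "model-quantized.onnx",
    ToyCalibrationReader(),
    op_types_to_quantize=["Add", "MatMul"],
    per_channel=False,
    weight_type=QuantType.QInt8,
    calibrate_method=CalibrationMethod.MinMax,
)
```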
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3
nawta
2022-07-13T14:03:36Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-13T11:47:57Z
--- tags: - generated_from_trainer model-index: - name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3 This model is a fine-tuned version of [/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin](https://huggingface.co//root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5350 - Cer: 1.2730 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.4243 | 4.67 | 500 | 2.6901 | 1.1259 | | 2.4282 | 9.35 | 1000 | 2.7495 | 1.1563 | | 2.3377 | 14.02 | 1500 | 2.2475 | 0.9617 | | 2.2434 | 18.69 | 2000 | 2.2765 | 1.1908 | | 2.2731 | 23.36 | 2500 | 2.2574 | 1.1669 | | 2.3436 | 28.04 | 3000 | 2.5350 | 1.2730 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
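The card above documents training only; for completeness, here is a hedged sketch of how one might run the published checkpoint for inference with the `transformers` ASR pipeline, assuming the repo ships a compatible processor and vocabulary (the audio path is a placeholder for a 16 kHz recording).

```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 CTC checkpoint; assumes the repo also contains
# the processor/vocabulary needed for decoding.
asr = pipeline(
    "automatic-speech-recognition",
    model="nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained_3",
)

# Placeholder audio file; the pipeline decodes and resamples it before inference.
print(asr("sample.wav")["text"])
```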
ArneD/distilbert-base-uncased-finetuned-emotion
ArneD
2022-07-13T13:43:21Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-21T06:42:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.922 - name: F1 type: f1 value: 0.9218894133133121 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2147 - Accuracy: 0.922 - F1: 0.9219 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8205 | 1.0 | 250 | 0.3028 | 0.909 | 0.9061 | | 0.245 | 2.0 | 500 | 0.2147 | 0.922 | 0.9219 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
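Since the card leaves usage open ("More information needed"), here is a minimal inference sketch with the `transformers` pipeline; the example sentence is arbitrary, and the returned label names depend on the `id2label` mapping stored in this checkpoint's config.

```python
from transformers import pipeline

# Emotion classifier fine-tuned on the 6-class "emotion" dataset
# (sadness, joy, love, anger, fear, surprise).
classifier = pipeline(
    "text-classification",
    model="ArneD/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top label and its score for the input sentence.
print(classifier("I can't believe how well this turned out!"))
```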
fxmarty/20220713-h13m33s02_example_conll2003
fxmarty
2022-07-13T13:33:09Z
0
0
null
[ "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "region:us" ]
token-classification
2022-07-13T13:33:02Z
--- pipeline_tag: token-classification datasets: - conll2003 metrics: - precision - recall - f1 - accuracy tags: - distilbert --- **task**: `token-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english` * **dataset**: * **path**: `conll2003` * **eval_split**: `validation` * **data_keys**: `{'primary': 'tokens'}` * **ref_keys**: `['ner_tags']` * **calibration_split**: `train` * **quantization_approach**: `static` * **operators_to_quantize**: `['Add', 'MatMul']` * **per_channel**: `False` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` # Evaluation ## Non-time metrics | node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) | | :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: | | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 | | `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 4, input length = 64. | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 103.46 | 53.77 | \| | 9.67 | 18.60 | | `[]` | \| | 90.62 | 65.86 | \| | 11.07 | 15.20 |
hossay/distilbert-base-uncased-finetuned-ner
hossay
2022-07-13T13:32:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-10T00:51:00Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: distilbert-base-uncased-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9263064854712186 - name: Recall type: recall value: 0.9379125181787672 - name: F1 type: f1 value: 0.9320733740967203 - name: Accuracy type: accuracy value: 0.9838117781625813 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0614 - Precision: 0.9263 - Recall: 0.9379 - F1: 0.9321 - Accuracy: 0.9838 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2418 | 1.0 | 878 | 0.0709 | 0.9168 | 0.9242 | 0.9204 | 0.9806 | | 0.0514 | 2.0 | 1756 | 0.0622 | 0.9175 | 0.9338 | 0.9255 | 0.9826 | | 0.0306 | 3.0 | 2634 | 0.0614 | 0.9263 | 0.9379 | 0.9321 | 0.9838 | ### Framework versions - Transformers 4.19.3 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
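A small usage sketch for the checkpoint above, assuming the standard `transformers` token-classification pipeline; `aggregation_strategy="simple"` merges word pieces into entity spans, and the example sentence is arbitrary.

```python
from transformers import pipeline

# Grouped-entity NER with the fine-tuned conll2003 checkpoint.
ner = pipeline(
    "token-classification",
    model="hossay/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

for entity in ner("Hugging Face is based in New York City."):
    print(entity["entity_group"], entity["word"], f"{entity['score']:.3f}")
```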
Chris1/q-FrozenLake-v1-4x4-noSlippery
Chris1
2022-07-13T13:15:57Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-13T10:45:52Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Chris1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
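The usage snippet in the card relies on `load_from_hub` and `evaluate_agent` helpers from the Deep RL course notebooks. Below is a self-contained sketch of what the loading and a greedy rollout amount to, assuming the pickled dictionary uses the keys shown in the card (`env_id`, `qtable`, ...) and the older `gym` step/reset API that was current in mid-2022.

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Download and unpickle the saved agent; key names follow the card's snippet.
path = hf_hub_download(
    repo_id="Chris1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"
)
with open(path, "rb") as f:
    agent = pickle.load(f)

# The repo tags indicate the non-slippery 4x4 map.
env = gym.make(agent["env_id"], is_slippery=False)

# Greedy rollout with the learned Q-table (old gym API: reset() returns the state).
state = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(agent["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```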
xichenn/distilbert-base-uncased-finetuned-emotion
xichenn
2022-07-13T12:59:22Z
16
1
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-19T13:16:57Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.924047984825329 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2294 - Accuracy: 0.924 - F1: 0.9240 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3316 | 0.9025 | 0.8985 | | No log | 2.0 | 500 | 0.2294 | 0.924 | 0.9240 | ### Framework versions - Transformers 4.20.0 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
frahman/distilbert-base-uncased-finetuned-emotion
frahman
2022-07-13T12:58:49Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9205 - name: F1 type: f1 value: 0.9206660865871332 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2202 - Accuracy: 0.9205 - F1: 0.9207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8234 | 1.0 | 250 | 0.3185 | 0.9025 | 0.8992 | | 0.2466 | 2.0 | 500 | 0.2202 | 0.9205 | 0.9207 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.0
jordyvl/udpos28-sm-first-POS
jordyvl
2022-07-13T12:53:00Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:udpos28", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-13T12:33:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - udpos28 metrics: - precision - recall - f1 - accuracy model-index: - name: udpos28-sm-first-POS results: - task: name: Token Classification type: token-classification dataset: name: udpos28 type: udpos28 args: en metrics: - name: Precision type: precision value: 0.9511089206505667 - name: Recall type: recall value: 0.9546093116207286 - name: F1 type: f1 value: 0.9528559014062253 - name: Accuracy type: accuracy value: 0.9559133601686793 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # udpos28-sm-first-POS This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the udpos28 dataset. It achieves the following results on the evaluation set: - Loss: 0.1896 - Precision: 0.9511 - Recall: 0.9546 - F1: 0.9529 - Accuracy: 0.9559 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1696 | 1.0 | 4978 | 0.1700 | 0.9440 | 0.9464 | 0.9452 | 0.9472 | | 0.0973 | 2.0 | 9956 | 0.1705 | 0.9487 | 0.9533 | 0.9510 | 0.9543 | | 0.0508 | 3.0 | 14934 | 0.1896 | 0.9511 | 0.9546 | 0.9529 | 0.9559 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
Sreevishnu/funnel-transformer-small-imdb
Sreevishnu
2022-07-13T12:17:17Z
6
1
transformers
[ "transformers", "pytorch", "funnel", "text-classification", "sentiment-analysis", "en", "dataset:imdb", "arxiv:2006.03236", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-15T18:48:18Z
--- license: apache-2.0 language: en widget: - text: "In the garden of wonderment that is the body of work by the animation master Hayao Miyazaki, his 2001 gem 'Spirited Away' is at once one of his most accessible films to a Western audience and the one most distinctly rooted in Japanese culture and lore. The tale of Chihiro, a 10 year old girl who resents being moved away from all her friends, only to find herself working in a bathhouse for the gods, doesn't just use its home country's fraught relationship with deities as a backdrop. Never remotely didactic, the film is ultimately a self-fulfilment drama that touches on religious, ethical, ecological and psychological issues. It's also a fine children's film, the kind that elicits a deepening bond across repeat viewings and the passage of time, mostly because Miyazaki refuses to talk down to younger viewers. That's been a constant in all of his filmography, but it's particularly conspicuous here because the stakes for its young protagonist are bigger than in most of his previous features aimed at younger viewers. It involves conquering fears and finding oneself in situations where safety is not a given. There are so many moving parts in Spirited Away, from both a thematic and technical point of view, that pinpointing what makes Spirited Away stand out from an already outstanding body of work becomes as challenging as a meeting with Yubaba. But I think it comes down to an ability to deal with heady, complex subject matter from a young girl's perspective without diluting or lessening its resonance. Miyazaki has made a loopy, demanding work of art that asks your inner child to come out and play. There are few high-wire acts in all of movie-dom as satisfying as that." datasets: - imdb tags: - sentiment-analysis --- # Funnel Transformer small (B4-4-4 with decoder) fine-tuned on IMDB for Sentiment Analysis These are the model weights for the Funnel Transformer small model fine-tuned on the IMDB dataset for performing Sentiment Analysis with `max_position_embeddings=1024`. The original model weights for English language are from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small) and it uses a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English. ## Fine-tuning Results | | Accuracy | Precision | Recall | F1 | |-------------------------------|----------|-----------|----------|----------| | funnel-transformer-small-imdb | 0.956530 | 0.952286 | 0.961075 | 0.956661 | ## Model description (from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small)) Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training. 
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. # How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained( "Sreevishnu/funnel-transformer-small-imdb", use_fast=True) model = AutoModelForSequenceClassification.from_pretrained( "Sreevishnu/funnel-transformer-small-imdb", num_labels=2, max_position_embeddings=1024) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` # Example App https://lazy-film-reviews-7gif2bz4sa-ew.a.run.app/ Project repo: https://github.com/akshaydevml/lazy-film-reviews
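The snippet in the card stops at the raw model output; the short continuation below shows how those logits are typically turned into a sentiment label. It assumes the checkpoint's config carries an `id2label` mapping (if it does not, the generic `LABEL_0`/`LABEL_1` names are printed instead).

```python
import torch

# Continues from `output = model(**encoded_input)` in the card's example.
probs = torch.softmax(output.logits, dim=-1)
pred = int(probs.argmax(dim=-1))

# Use the label mapping from the checkpoint config when available.
label = model.config.id2label.get(pred, f"LABEL_{pred}")
print(label, float(probs[0, pred]))
```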
facebook/deit-base-patch16-384
facebook
2022-07-13T11:41:03Z
349
1
transformers
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification datasets: - imagenet-1k --- # Data-efficient Image Transformer (base-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 384x384. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained at resolution 224 and fine-tuned at resolution 384 on a large collection of images in a supervised fashion, namely ImageNet-1k. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-384') model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (438x438), center-cropped at 384x384 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Pre-training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | DeiT-base | 81.8 | 95.6 | 86M | https://huggingface.co/facebook/deit-base-patch16-224 | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | **DeiT-base 384** | **82.9** | **96.2** | **87M** | **https://huggingface.co/facebook/deit-base-patch16-384** | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
facebook/deit-base-patch16-224
facebook
2022-07-13T11:40:44Z
144,060
13
transformers
[ "transformers", "pytorch", "tf", "vit", "image-classification", "dataset:imagenet-1k", "arxiv:2012.12877", "arxiv:2006.03677", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - image-classification datasets: - imagenet-1k --- # Data-efficient Image Transformer (base-sized model) Data-efficient Image Transformer (DeiT) model pre-trained and fine-tuned on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Touvron et al. and first released in [this repository](https://github.com/facebookresearch/deit). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman. Disclaimer: The team releasing DeiT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description This model is actually a more efficiently trained Vision Transformer (ViT). The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pre-trained and fine-tuned on a large collection of images in a supervised fashion, namely ImageNet-1k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=facebook/deit) to look for fine-tuned versions on a task that interests you. ### How to use Since this model is a more efficiently trained ViT model, you can plug it into ViTModel or ViTForImageClassification. Note that the model expects the data to be prepared using DeiTFeatureExtractor. Here we use AutoFeatureExtractor, which will automatically use the appropriate feature extractor given the model name. Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import AutoFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/deit-base-patch16-224') model = ViTForImageClassification.from_pretrained('facebook/deit-base-patch16-224') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. Tensorflow and JAX/FLAX are coming soon. ## Training data The ViT model was pretrained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes. 
## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/facebookresearch/deit/blob/ab5715372db8c6cad5740714b2216d55aeae052e/datasets.py#L78). At inference time, images are resized/rescaled to the same resolution (256x256), center-cropped at 224x224 and normalized across the RGB channels with the ImageNet mean and standard deviation. ### Pretraining The model was trained on a single 8-GPU node for 3 days. Training resolution is 224. For all hyperparameters (such as batch size and learning rate) we refer to table 9 of the original paper. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |---------------------------------------|-------------------------|-------------------------|----------|------------------------------------------------------------------| | DeiT-tiny | 72.2 | 91.1 | 5M | https://huggingface.co/facebook/deit-tiny-patch16-224 | | DeiT-small | 79.9 | 95.0 | 22M | https://huggingface.co/facebook/deit-small-patch16-224 | | **DeiT-base** | **81.8** | **95.6** | **86M** | **https://huggingface.co/facebook/deit-base-patch16-224** | | DeiT-tiny distilled | 74.5 | 91.9 | 6M | https://huggingface.co/facebook/deit-tiny-distilled-patch16-224 | | DeiT-small distilled | 81.2 | 95.4 | 22M | https://huggingface.co/facebook/deit-small-distilled-patch16-224 | | DeiT-base distilled | 83.4 | 96.5 | 87M | https://huggingface.co/facebook/deit-base-distilled-patch16-224 | | DeiT-base 384 | 82.9 | 96.2 | 87M | https://huggingface.co/facebook/deit-base-patch16-384 | | DeiT-base distilled 384 (1000 epochs) | 85.2 | 97.2 | 88M | https://huggingface.co/facebook/deit-base-distilled-patch16-384 | Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{touvron2021training, title={Training data-efficient image transformers & distillation through attention}, author={Hugo Touvron and Matthieu Cord and Matthijs Douze and Francisco Massa and Alexandre Sablayrolles and Hervé Jégou}, year={2021}, eprint={2012.12877}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
matjesg/deepflash2_demo
matjesg
2022-07-13T10:54:35Z
0
2
null
[ "onnx", "image-segmentation", "semantic-segmentation", "deepflash2", "arxiv:2111.06693", "license:apache-2.0", "region:us" ]
image-segmentation
2022-05-31T09:43:39Z
--- tags: - image-segmentation - semantic-segmentation - deepflash2 license: apache-2.0 datasets: - "cFOS in HC" - "YFP in CTX" --- # Demo models for ![deepflash2](https://raw.githubusercontent.com/matjesg/deepflash2/master/nbs/media/logo/deepflash2_logo_medium.png) **Try in [Hugging Face Spaces](https://huggingface.co/spaces/matjesg/deepflash2)** 🤗🤗🤗 - **Task**: Image Segmentation / Semantic Segmentation - **Paper**: The preprint of our paper is available on [arXiv](https://arxiv.org/pdf/2111.06693.pdf) - **Data**: The cFOS in HC dataset ([Article](https://doi.org/10.7554/eLife.59780), [Data](https://doi.org/10.5061/dryad.4b8gtht9d)) describes the indirect immunofluorescent labeling of the transcription factor cFOS in different subregions of the hippocampus after behavioral testing of the mice. - **Library**: See [github](https://github.com/matjesg/deepflash2/)
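The repo above hosts ONNX exports of the demo models. As a purely illustrative sketch, generic `onnxruntime` inference looks like the following; the file name, input shape, and preprocessing are placeholders (the actual export name and the deepflash2 preprocessing are documented in the linked GitHub repository).

```python
import numpy as np
import onnxruntime as ort

# Placeholder file name; check the repo for the actual ONNX export.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Placeholder input: a single-channel float image batch; real inputs follow
# the deepflash2 preprocessing pipeline.
image = np.random.rand(1, 1, 512, 512).astype(np.float32)

outputs = session.run(None, {input_name: image})
print([o.shape for o in outputs])
```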
fxmarty/20220713-h10m20s05_example_conll2003
fxmarty
2022-07-13T10:20:11Z
0
0
null
[ "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "region:us" ]
token-classification
2022-07-13T10:20:05Z
--- pipeline_tag: token-classification datasets: - conll2003 metrics: - precision - recall - f1 - accuracy tags: - distilbert --- **task**: `token-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english` * **dataset**: * **path**: `conll2003` * **eval_split**: `validation` * **data_keys**: `{'primary': 'tokens'}` * **ref_keys**: `['ner_tags']` * **calibration_split**: `train` * **quantization_approach**: `static` * **operators_to_quantize**: `['Add', 'MatMul']` * **per_channel**: `False` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` # Evaluation ## Non-time metrics | node_exclusion | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) | | :------------------------------------------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: | | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 0.936 | 0.904 | \| | 0.944 | 0.921 | \| | 0.940 | 0.912 | \| | 0.988 | 0.984 | | `[]` | \| | 0.936 | 0.065 | \| | 0.944 | 0.243 | \| | 0.940 | 0.103 | \| | 0.988 | 0.357 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 4, input length = 64. | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 120.53 | 46.41 | \| | 8.33 | 21.60 | | `[]` | \| | 119.97 | 59.50 | \| | 8.40 | 16.87 |
jordyvl/bert-base-cased_conll2003-sm-all-ner
jordyvl
2022-07-13T10:13:33Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-13T09:59:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-cased_conll2003-sm-all-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9487479131886477 - name: Recall type: recall value: 0.9564119824974756 - name: F1 type: f1 value: 0.9525645323499833 - name: Accuracy type: accuracy value: 0.9916085822203186 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased_conll2003-sm-all-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0489 - Precision: 0.9487 - Recall: 0.9564 - F1: 0.9526 - Accuracy: 0.9916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.052 | 1.0 | 3511 | 0.0510 | 0.9374 | 0.9456 | 0.9415 | 0.9898 | | 0.0213 | 2.0 | 7022 | 0.0497 | 0.9484 | 0.9519 | 0.9501 | 0.9911 | | 0.0099 | 3.0 | 10533 | 0.0489 | 0.9487 | 0.9564 | 0.9526 | 0.9916 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
nickcpk/distilbert-base-uncased-finetuned-squad-d5716d28
nickcpk
2022-07-13T09:51:40Z
0
0
null
[ "pytorch", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "license:apache-2.0", "region:us" ]
question-answering
2022-07-13T09:51:27Z
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
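For reference, a minimal inference sketch with the `transformers` question-answering pipeline, assuming the repo contains a complete tokenizer alongside the PyTorch weights; the question/context pair is arbitrary.

```python
from transformers import pipeline

# Extractive QA with the twice-distilled DistilBERT checkpoint.
qa = pipeline(
    "question-answering",
    model="nickcpk/distilbert-base-uncased-finetuned-squad-d5716d28",
)

result = qa(
    question="What acts as the teacher in the second distillation step?",
    context=(
        "A DistilBERT student is fine-tuned on SQuAD v1.1, with a BERT model, "
        "also fine-tuned on SQuAD v1.1, acting as a teacher."
    ),
)
print(result["answer"], result["score"])
```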
casasdorjunior/t5-small-finetuned-cc-news-es-titles
casasdorjunior
2022-07-13T08:52:55Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:cc-news-es-titles", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-13T07:38:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - cc-news-es-titles metrics: - rouge model-index: - name: t5-small-finetuned-cc-news-es-titles results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: cc-news-es-titles type: cc-news-es-titles args: default metrics: - name: Rouge1 type: rouge value: 16.701 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-cc-news-es-titles This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cc-news-es-titles dataset. It achieves the following results on the evaluation set: - Loss: 2.6383 - Rouge1: 16.701 - Rouge2: 4.1265 - Rougel: 14.8175 - Rougelsum: 14.8193 - Gen Len: 18.9159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:| | 2.8439 | 1.0 | 23133 | 2.6383 | 16.701 | 4.1265 | 14.8175 | 14.8193 | 18.9159 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
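A hedged usage sketch for the checkpoint above with the `text2text-generation` pipeline; the Spanish snippet is an arbitrary example, and the card does not say whether the model was trained with a task prefix (such as `summarize: `), so the input format is an assumption.

```python
from transformers import pipeline

# Spanish news-title generation with the fine-tuned T5 checkpoint.
generator = pipeline(
    "text2text-generation",
    model="casasdorjunior/t5-small-finetuned-cc-news-es-titles",
)

article = (
    "El ayuntamiento anunció este lunes un nuevo plan de movilidad que "
    "ampliará la red de carriles bici y reducirá el tráfico en el centro."
)
# Generated titles average ~19 tokens in the card's eval, so cap generation near that.
print(generator(article, max_length=24)[0]["generated_text"])
```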
fxmarty/20220713-h08m45s49_example_squad
fxmarty
2022-07-13T08:46:02Z
0
0
null
[ "tensorboard", "distilbert", "question-answering", "dataset:squad", "region:us" ]
question-answering
2022-07-13T08:45:49Z
--- pipeline_tag: question-answering datasets: - squad metrics: - exact_match - f1 tags: - distilbert --- **task**: `question-answering` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}` **Number of evaluation samples:** `1000` Fixed parameters: * **model_name_or_path**: `distilbert-base-uncased-distilled-squad` * **dataset**: * **path**: `squad` * **eval_split**: `validation` * **data_keys**: `{'question': 'question', 'context': 'context'}` * **ref_keys**: `['answers']` * **calibration_split**: `train` * **per_channel**: `False` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **quantization_approach**: `dynamic`, `static` * **operators_to_quantize**: `['Add']`, `['Add', 'MatMul']` * **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` # Evaluation ## Non-time metrics | quantization_approach | operators_to_quantize | node_exclusion | | exact_match (original) | exact_match (optimized) | | f1 (original) | f1 (optimized) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :--------------------: | :---------------------: | :-: | :-----------: | :------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 80.600 | \| | 87.232 | 86.097 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 82.300 | 80.600 | \| | 87.232 | 86.097 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 82.300 | \| | 87.232 | 87.232 | | `dynamic` | `['Add']` | `[]` | \| | 82.300 | 82.300 | \| | 87.232 | 87.232 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 72.900 | \| | 87.232 | 79.964 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 82.300 | 54.500 | \| | 87.232 | 64.292 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 82.300 | 76.900 | \| | 87.232 | 83.014 | | `static` | `['Add']` | `[]` | \| | 82.300 | 59.800 | \| | 87.232 | 69.217 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 1, input length = 32. 
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 47.87 | 7.23 | \| | 20.93 | 138.40 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 48.10 | 7.14 | \| | 20.80 | 140.13 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 43.83 | 17.16 | \| | 22.87 | 58.33 | | `dynamic` | `['Add']` | `[]` | \| | 34.13 | 17.02 | \| | 29.33 | 58.80 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 35.07 | 9.21 | \| | 28.53 | 108.53 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 48.27 | 11.62 | \| | 20.73 | 86.13 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 34.11 | 19.23 | \| | 29.33 | 52.00 | | `static` | `['Add']` | `[]` | \| | 48.54 | 21.18 | \| | 20.67 | 47.27 | Below, time metrics for batch size = 1, input length = 64. | quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 59.92 | 12.60 | \| | 16.73 | 79.40 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 59.64 | 13.25 | \| | 16.80 | 75.47 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 60.13 | 29.65 | \| | 16.67 | 33.73 | | `dynamic` | `['Add']` | `[]` | \| | 59.62 | 29.51 | \| | 16.80 | 33.93 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 58.94 | 15.13 | \| | 17.00 | 66.13 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 60.49 | 18.62 | \| | 16.53 | 53.73 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 43.32 | 28.00 | \| | 23.13 | 35.73 | | `static` | `['Add']` | `[]` | \| | 44.19 | 32.72 | \| | 22.67 | 30.60 | Below, time metrics for batch size = 1, input length = 128. 
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 73.39 | 26.56 | \| | 13.67 | 37.67 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 57.64 | 23.42 | \| | 17.40 | 42.73 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 64.04 | 50.14 | \| | 15.67 | 20.00 | | `dynamic` | `['Add']` | `[]` | \| | 72.81 | 57.05 | \| | 13.80 | 17.53 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 70.57 | 27.59 | \| | 14.20 | 36.27 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 71.04 | 37.94 | \| | 14.13 | 26.40 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 57.65 | 57.95 | \| | 17.40 | 17.27 | | `static` | `['Add']` | `[]` | \| | 71.66 | 58.67 | \| | 14.00 | 17.07 | Below, time metrics for batch size = 4, input length = 32. | quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 72.11 | 21.80 | \| | 13.93 | 45.93 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 73.15 | 20.70 | \| | 13.73 | 48.33 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 72.05 | 53.68 | \| | 13.93 | 18.67 | | `dynamic` | `['Add']` | `[]` | \| | 55.97 | 53.60 | \| | 17.87 | 18.67 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 70.46 | 24.88 | \| | 14.20 | 40.20 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 56.57 | 30.90 | \| | 17.73 | 32.40 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 62.38 | 53.64 | \| | 16.07 | 18.67 | | `static` | `['Add']` | `[]` | \| | 60.19 | 67.29 | \| | 16.67 | 14.87 | Below, time metrics for batch size = 4, input length = 64. 
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 121.20 | 40.12 | \| | 8.27 | 24.93 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 90.97 | 41.51 | \| | 11.00 | 24.13 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 120.85 | 106.50 | \| | 8.33 | 9.40 | | `dynamic` | `['Add']` | `[]` | \| | 118.58 | 106.55 | \| | 8.47 | 9.40 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 120.57 | 54.25 | \| | 8.33 | 18.47 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 104.93 | 57.90 | \| | 9.60 | 17.33 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 90.85 | 110.46 | \| | 11.07 | 9.07 | | `static` | `['Add']` | `[]` | \| | 120.57 | 103.62 | \| | 8.33 | 9.67 | Below, time metrics for batch size = 4, input length = 128. | quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 172.14 | 94.78 | \| | 5.87 | 10.60 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 220.38 | 84.18 | \| | 4.60 | 11.93 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 221.22 | 221.37 | \| | 4.53 | 4.53 | | `dynamic` | `['Add']` | `[]` | \| | 203.90 | 175.16 | \| | 4.93 | 5.73 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 192.63 | 113.82 | \| | 5.20 | 8.80 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 220.32 | 122.36 | \| | 4.60 | 8.20 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 220.58 | 207.51 | \| | 4.60 | 4.87 | | `static` | `['Add']` | `[]` | \| | 221.94 | 246.87 | \| | 4.53 | 4.07 | Below, time metrics for batch size = 8, input length = 32. 
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 112.67 | 43.26 | \| | 8.93 | 23.13 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 95.78 | 40.66 | \| | 10.47 | 24.60 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 117.38 | 104.28 | \| | 8.53 | 9.60 | | `dynamic` | `['Add']` | `[]` | \| | 89.81 | 91.00 | \| | 11.20 | 11.00 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 89.14 | 52.09 | \| | 11.27 | 19.20 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 92.77 | 64.21 | \| | 10.80 | 15.60 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 119.10 | 114.43 | \| | 8.40 | 8.80 | | `static` | `['Add']` | `[]` | \| | 119.28 | 127.79 | \| | 8.40 | 7.87 | Below, time metrics for batch size = 8, input length = 64. | quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 215.03 | 78.03 | \| | 4.67 | 12.87 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 214.76 | 87.19 | \| | 4.67 | 11.53 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 216.48 | 162.64 | \| | 4.67 | 6.20 | | `dynamic` | `['Add']` | `[]` | \| | 204.29 | 212.33 | \| | 4.93 | 4.73 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 215.47 | 104.45 | \| | 4.67 | 9.60 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 209.66 | 106.43 | \| | 4.80 | 9.40 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 166.13 | 220.92 | \| | 6.07 | 4.53 | | `static` | `['Add']` | `[]` | \| | 214.69 | 209.01 | \| | 4.67 | 4.80 | Below, time metrics for batch size = 8, input length = 128. 
| quantization_approach | operators_to_quantize | node_exclusion | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :------------------------------------------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 407.90 | 151.49 | \| | 2.47 | 6.67 | | `dynamic` | `['Add', 'MatMul']` | `[]` | \| | 407.34 | 154.55 | \| | 2.47 | 6.53 | | `dynamic` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 406.51 | 394.85 | \| | 2.47 | 2.60 | | `dynamic` | `['Add']` | `[]` | \| | 309.53 | 445.24 | \| | 3.27 | 2.27 | | `static` | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 407.54 | 224.46 | \| | 2.47 | 4.47 | | `static` | `['Add', 'MatMul']` | `[]` | \| | 408.14 | 236.94 | \| | 2.47 | 4.27 | | `static` | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | \| | 309.91 | 357.87 | \| | 3.27 | 2.80 | | `static` | `['Add']` | `[]` | \| | 310.00 | 406.54 | \| | 3.27 | 2.47 |
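The tables above only report configurations and timings; the exact benchmarking script is not included in this card. As a rough, illustrative mapping of the benchmarked knobs (`operators_to_quantize`, `node_exclusion`, `quantization_approach: dynamic`) onto ONNX Runtime's quantization API, and not the pipeline actually used to produce these numbers, a sketch could look like this:

```python
# Illustrative sketch only -- not the script that produced the numbers above.
# It shows how the benchmarked knobs map onto ONNX Runtime's dynamic
# quantization API for an already-exported model.onnx.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model.onnx",                # assumed path to an exported ONNX model
    model_output="model-quantized.onnx",     # destination for the int8 model
    op_types_to_quantize=["Add", "MatMul"],  # mirrors `operators_to_quantize`
    nodes_to_exclude=[],                     # node exclusion here expects explicit node names
    weight_type=QuantType.QInt8,
)
```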
huggingartists/queen
huggingartists
2022-07-13T06:52:09Z
5
1
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingartists", "lyrics", "lm-head", "causal-lm", "en", "dataset:huggingartists/queen", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - huggingartists/queen tags: - huggingartists - lyrics - lm-head - causal-lm widget: - text: "I am" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/97bcb5755cb9780d76b37726a0ce4bef.1000x1000x1.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Queen</div> <a href="https://genius.com/artists/queen"> <div style="text-align: center; font-size: 14px;">@queen</div> </a> </div> I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists). Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)! ## How does it work? To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist). ## Training data The model was trained on lyrics from Queen. Dataset is available [here](https://huggingface.co/datasets/huggingartists/queen). And can be used with: ```python from datasets import load_dataset dataset = load_dataset("huggingartists/queen") ``` [Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/1jdprwq2/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Queen's lyrics. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2lvkoamo/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingartists/queen') generator("I am", num_return_sequences=5) ``` Or with Transformers library: ```python from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("huggingartists/queen") model = AutoModelWithLMHead.from_pretrained("huggingartists/queen") ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Aleksey Korshuk* [![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk) [![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk) [![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
FelipeAD/mt5-small-SENTENCE_COMPRESSION
FelipeAD
2022-07-13T06:44:19Z
3
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-12T21:29:25Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: FelipeAD/mt5-small-SENTENCE_COMPRESSION results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # FelipeAD/mt5-small-SENTENCE_COMPRESSION This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1433 - Validation Loss: 0.9768 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 179848, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.6046 | 1.1992 | 0 | | 1.3586 | 1.0826 | 1 | | 1.2178 | 1.0241 | 2 | | 1.1433 | 0.9768 | 3 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.6.0 - Datasets 2.3.2 - Tokenizers 0.12.1
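The card does not include an inference example; a minimal, hypothetical usage sketch (assuming the TensorFlow weights load through the standard pipeline API) is:

```python
# Hypothetical usage sketch; the card itself does not document inference.
from transformers import pipeline

compressor = pipeline(
    "text2text-generation",
    model="FelipeAD/mt5-small-SENTENCE_COMPRESSION",
    framework="tf",  # the repo ships TensorFlow weights
)
print(compressor("The quick brown fox jumped over the extremely lazy dog sleeping in the sun."))
```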
abx/bert-finetuned-ner
abx
2022-07-13T06:15:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-13T06:04:39Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.9341713529606351 - name: Recall type: recall value: 0.9505217098619994 - name: F1 type: f1 value: 0.9422756089422756 - name: Accuracy type: accuracy value: 0.9861070230176017 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0623 - Precision: 0.9342 - Recall: 0.9505 - F1: 0.9423 - Accuracy: 0.9861 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0865 | 1.0 | 1756 | 0.0667 | 0.9166 | 0.9379 | 0.9271 | 0.9829 | | 0.0397 | 2.0 | 3512 | 0.0560 | 0.9337 | 0.9522 | 0.9428 | 0.9867 | | 0.0194 | 3.0 | 5268 | 0.0623 | 0.9342 | 0.9505 | 0.9423 | 0.9861 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu116 - Datasets 2.3.2 - Tokenizers 0.12.1
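The card does not show how to run the model; a hypothetical inference sketch using the standard token-classification pipeline is:

```python
# Hypothetical usage sketch; the card does not include an inference example.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="abx/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```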
NimaBoscarino/STPushToHub-test2
NimaBoscarino
2022-07-13T05:57:37Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-07-13T05:49:12Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # NimaBoscarino/STPushToHub-test2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('NimaBoscarino/STPushToHub-test2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('NimaBoscarino/STPushToHub-test2') model = AutoModel.from_pretrained('NimaBoscarino/STPushToHub-test2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=NimaBoscarino/STPushToHub-test2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 360 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 144, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
sun1638650145/Reinforce-Pong-PLE-v0
sun1638650145
2022-07-13T05:32:26Z
0
0
null
[ "Pong-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-13T05:32:13Z
--- tags: - Pong-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pong-PLE-v0 results: - metrics: - type: mean_reward value: -16.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pong-PLE-v0 type: Pong-PLE-v0 --- # A **Reinforce** agent playing **Pong-PLE-v0** This is a trained **Reinforce** model playing **Pong-PLE-v0**. To learn how to use this model and to train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5

huggingtweets/majigglydoobers
huggingtweets
2022-07-13T02:58:05Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-13T02:56:45Z
--- language: en thumbnail: http://www.huggingtweets.com/majigglydoobers/1657681081092/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1542204712455241729/6E7rxSrt_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">doobers 👻❤️‍🩹</div> <div style="text-align: center; font-size: 14px;">@majigglydoobers</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from doobers 👻❤️‍🩹. | Data | doobers 👻❤️‍🩹 | | --- | --- | | Tweets downloaded | 3249 | | Retweets | 2046 | | Short tweets | 199 | | Tweets kept | 1004 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36h6xok5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @majigglydoobers's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/emkivtny) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/emkivtny/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/majigglydoobers') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ariesutiono/scibert-lm-const-finetuned-20
ariesutiono
2022-07-13T00:15:55Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "dataset:conll2003", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-12T23:32:22Z
--- tags: - generated_from_trainer datasets: - conll2003 model-index: - name: scibert-lm-const-finetuned-20 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # scibert-lm-const-finetuned-20 This model is a fine-tuned version of [allenai/scibert_scivocab_cased](https://huggingface.co/allenai/scibert_scivocab_cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 2.0099 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6081 | 1.0 | 118 | 2.9156 | | 2.7954 | 2.0 | 236 | 2.5940 | | 2.5762 | 3.0 | 354 | 2.5017 | | 2.4384 | 4.0 | 472 | 2.3923 | | 2.3391 | 5.0 | 590 | 2.2996 | | 2.2417 | 6.0 | 708 | 2.3180 | | 2.2161 | 7.0 | 826 | 2.2336 | | 2.1918 | 8.0 | 944 | 2.2465 | | 2.1494 | 9.0 | 1062 | 2.1871 | | 2.1215 | 10.0 | 1180 | 2.1566 | | 2.1015 | 11.0 | 1298 | 2.1849 | | 2.05 | 12.0 | 1416 | 2.1092 | | 2.0653 | 13.0 | 1534 | 2.2221 | | 2.0261 | 14.0 | 1652 | 2.1572 | | 2.0117 | 15.0 | 1770 | 2.1452 | | 1.9845 | 16.0 | 1888 | 2.1433 | | 1.9791 | 17.0 | 2006 | 2.1225 | | 1.9979 | 18.0 | 2124 | 2.0777 | | 1.9688 | 19.0 | 2242 | 2.1765 | | 1.9873 | 20.0 | 2360 | 2.0099 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
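Since this is a masked-language model, a hypothetical way to query it (not documented in the card) is the fill-mask pipeline:

```python
# Hypothetical usage sketch; the card only reports language-modelling loss.
from transformers import pipeline

fill = pipeline("fill-mask", model="ariesutiono/scibert-lm-const-finetuned-20")
print(fill("The protein was expressed in [MASK] cells."))
```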
hugginglearners/fastai-style-transfer
hugginglearners
2022-07-13T00:15:26Z
0
5
fastai
[ "fastai", "pytorch", "image-to-image", "region:us" ]
image-to-image
2022-07-09T14:16:38Z
--- tags: - fastai - pytorch - image-to-image --- ## Model description This repo contains a trained style-transfer model that uses VGG16 as its backbone. Full credits go to [Nhu Hoang](https://www.linkedin.com/in/nhu-hoang/). Motivation: style transfer is an interesting task that produces visually striking results. ## Training and evaluation data ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | optimizer | Adam | | learning_rate | 3e-5 | | training_precision | float16 |
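Loading the learner from the Hub is not documented in the card; a hypothetical sketch using huggingface_hub's fastai integration (the exact input and output types depend on how the learner was exported) is:

```python
# Hypothetical usage sketch; assumes the repo loads through the fastai
# integration in huggingface_hub.
from huggingface_hub import from_pretrained_fastai

learner = from_pretrained_fastai("hugginglearners/fastai-style-transfer")
stylized = learner.predict("photo.jpg")  # path to an input image (assumed)
```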
hugginglearners/multi-object-classification
hugginglearners
2022-07-13T00:14:55Z
0
2
fastai
[ "fastai", "image-classification", "region:us" ]
image-classification
2022-07-04T04:34:10Z
--- tags: - fastai - image-classification --- ## Model description This repo contains a trained multi-object classification model. Full credits go to [Nhu Hoang](https://www.linkedin.com/in/nhu-hoang/). Motivation: classifying multiple objects is challenging without an object detection algorithm. This model was trained with a ResNet-34 backbone and achieves good accuracy. ## Training and evaluation data ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | optimizer | Adam | | learning_rate | 3e-3 | | training_precision | float16 |
andrewzhang505/quad-swarm-rl-1
andrewzhang505
2022-07-13T00:02:06Z
5
0
sample-factory
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "region:us" ]
reinforcement-learning
2022-07-12T21:09:52Z
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory --- An **APPO** model trained on the **quadrotor_multi** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
AntiSquid/Reinforce-model-666
AntiSquid
2022-07-12T21:52:02Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T21:51:51Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-model-666 results: - metrics: - type: mean_reward value: 117.10 +/- 4.85 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1**. To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Shaier/medqa_fine_tuned_generic_bert
Shaier
2022-07-12T20:33:17Z
1
0
transformers
[ "transformers", "pytorch", "bert", "multiple-choice", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-07-12T19:49:52Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: medqa_fine_tuned_generic_bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medqa_fine_tuned_generic_bert This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4239 - Accuracy: 0.2869 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 1.3851 | 0.2594 | | 1.3896 | 2.0 | 636 | 1.3805 | 0.2807 | | 1.3896 | 3.0 | 954 | 1.3852 | 0.2948 | | 1.3629 | 4.0 | 1272 | 1.3996 | 0.2980 | | 1.3068 | 5.0 | 1590 | 1.4239 | 0.2869 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.3.2 - Tokenizers 0.11.0
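The card does not document inference; a hypothetical sketch for a multiple-choice head, with a made-up question and options and an assumed question/option input format, is:

```python
# Hypothetical inference sketch for a multiple-choice head; the exact
# question/option formatting used during fine-tuning is not documented.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Shaier/medqa_fine_tuned_generic_bert")
model = AutoModelForMultipleChoice.from_pretrained("Shaier/medqa_fine_tuned_generic_bert")

question = "Which vitamin deficiency causes scurvy?"
options = ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"]

# Encode the question once per option, giving tensors of shape (num_options, seq_len),
# then add a batch dimension: (1, num_options, seq_len).
enc = tokenizer([question] * len(options), options, padding=True, return_tensors="pt")
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_options)
print(options[logits.argmax(dim=-1).item()])  # the model's top-scoring option
```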
huggingtweets/dylanfromsf
huggingtweets
2022-07-12T20:29:49Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-12T20:29:12Z
--- language: en thumbnail: http://www.huggingtweets.com/dylanfromsf/1657657784578/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1384643526772678657/O7Sz_ZxW_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">dylan</div> <div style="text-align: center; font-size: 14px;">@dylanfromsf</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from dylan. | Data | dylan | | --- | --- | | Tweets downloaded | 1288 | | Retweets | 116 | | Short tweets | 420 | | Tweets kept | 752 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2526mmm1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dylanfromsf's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ds3020w) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ds3020w/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dylanfromsf') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ychenNLP/arabic-ner-ace
ychenNLP
2022-07-12T20:02:24Z
15
2
transformers
[ "transformers", "pytorch", "tf", "bert", "text-classification", "BERT", "token-classification", "sequence-tagger-model", "ar", "en", "dataset:ACE2005", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-06-29T18:45:26Z
--- tags: - BERT - token-classification - sequence-tagger-model language: - ar - en license: mit datasets: - ACE2005 --- # Arabic NER Model - [Github repo](https://github.com/edchengg/GigaBERT) - NER BIO tagging model based on [GigaBERTv4](https://huggingface.co/lanwuwei/GigaBERT-v4-Arabic-and-English). - ACE2005 Training data: English + Arabic - [NER tags](https://www.ldc.upenn.edu/sites/www.ldc.upenn.edu/files/english-entities-guidelines-v6.6.pdf) including: PER, VEH, GPE, WEA, ORG, LOC, FAC ## Hyperparameters - learning_rate=2e-5 - num_train_epochs=10 - weight_decay=0.01 ## ACE2005 Evaluation results (F1) | Language | Arabic | English | |:----:|:-----------:|:----:| | | 89.4 | 88.8 | ## How to use ```python >>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer >>> ner_model = AutoModelForTokenClassification.from_pretrained("ychenNLP/arabic-ner-ace") >>> ner_tokenizer = AutoTokenizer.from_pretrained("ychenNLP/arabic-ner-ace") >>> ner_pip = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True) >>> output = ner_pip('Protests break out across the US after Supreme Court overturns.') >>> print(output) [{'entity_group': 'GPE', 'score': 0.9979881, 'word': 'us', 'start': 30, 'end': 32}, {'entity_group': 'ORG', 'score': 0.99898684, 'word': 'supreme court', 'start': 39, 'end': 52}] >>> output = ner_pip('قال وزير العدل التركي بكير بوزداغ إن أنقرة تريد 12 مشتبهاً بهم من فنلندا و 21 من السويد') >>> print(output) [{'entity_group': 'PER', 'score': 0.9996214, 'word': 'وزير', 'start': 4, 'end': 8}, {'entity_group': 'ORG', 'score': 0.9952383, 'word': 'العدل', 'start': 9, 'end': 14}, {'entity_group': 'GPE', 'score': 0.9996675, 'word': 'التركي', 'start': 15, 'end': 21}, {'entity_group': 'PER', 'score': 0.9978992, 'word': 'بكير بوزداغ', 'start': 22, 'end': 33}, {'entity_group': 'GPE', 'score': 0.9997154, 'word': 'انقرة', 'start': 37, 'end': 42}, {'entity_group': 'PER', 'score': 0.9946885, 'word': 'مشتبها بهم', 'start': 51, 'end': 62}, {'entity_group': 'GPE', 'score': 0.99967396, 'word': 'فنلندا', 'start': 66, 'end': 72}, {'entity_group': 'PER', 'score': 0.99694425, 'word': '21', 'start': 75, 'end': 77}, {'entity_group': 'GPE', 'score': 0.99963355, 'word': 'السويد', 'start': 81, 'end': 87}] ``` ### BibTeX entry and citation info ```bibtex @inproceedings{lan2020gigabert, author = {Lan, Wuwei and Chen, Yang and Xu, Wei and Ritter, Alan}, title = {Giga{BERT}: Zero-shot Transfer Learning from {E}nglish to {A}rabic}, booktitle = {Proceedings of The 2020 Conference on Empirical Methods on Natural Language Processing (EMNLP)}, year = {2020} } ```
didi27/bloom-edu
didi27
2022-07-12T17:57:21Z
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
2022-07-12T17:57:16Z
--- license: bigscience-bloom-rail-1.0 ---
huggingtweets/masonhaggerty
huggingtweets
2022-07-12T17:17:06Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-12T16:48:40Z
--- language: en thumbnail: http://www.huggingtweets.com/masonhaggerty/1657646221015/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1410026132121047041/LiYev7vQ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Mason Haggerty</div> <div style="text-align: center; font-size: 14px;">@masonhaggerty</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Mason Haggerty. | Data | Mason Haggerty | | --- | --- | | Tweets downloaded | 785 | | Retweets | 71 | | Short tweets | 82 | | Tweets kept | 632 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/jpav9nmg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @masonhaggerty's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/bs6k2tzz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/bs6k2tzz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/masonhaggerty') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Li-Tang/rare-puppers
Li-Tang
2022-07-12T16:57:55Z
54
0
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-07-12T16:57:42Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: rare-puppers results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9701492786407471 --- # rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### corgi ![corgi](images/corgi.jpg) #### samoyed ![samoyed](images/samoyed.jpg) #### shiba inu ![shiba inu](images/shiba_inu.jpg)
zluvolyote/s288cExpressionPrediction_k6
zluvolyote
2022-07-12T16:54:43Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-12T16:02:01Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: s288cExpressionPrediction_k6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # s288cExpressionPrediction_k6 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4418 - Accuracy: 0.8067 - F1: 0.7882 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 58 | 0.5315 | 0.7278 | 0.7572 | | No log | 2.0 | 116 | 0.4604 | 0.7853 | 0.7841 | | No log | 3.0 | 174 | 0.4418 | 0.8067 | 0.7882 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
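The `_k6` suffix suggests the DNA sequences were tokenized as overlapping 6-mers before classification, although the card does not document the preprocessing; under that assumption, a hypothetical inference sketch is:

```python
# Hypothetical sketch: assumes inputs are DNA sequences split into overlapping
# 6-mers separated by spaces (implied by the `_k6` suffix, not documented).
from transformers import pipeline

def to_kmers(seq: str, k: int = 6) -> str:
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

clf = pipeline("text-classification", model="zluvolyote/s288cExpressionPrediction_k6")
print(clf(to_kmers("ATGCGTACCTGAAGGCTATTCGA")))
```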
reachrkr/TEST2ppo-LunarLander-v2
reachrkr
2022-07-12T16:20:36Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T16:20:08Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 266.96 +/- 25.94 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
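As a hypothetical completion of the template above (the checkpoint filename inside the repo is an assumption and may differ):

```python
# Hypothetical completion of the TODO template; the filename is assumed.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="reachrkr/TEST2ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",  # assumed filename
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy on a fresh environment (requires Box2D).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```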
fxmarty/20220712-h16m02s58_example_beans
fxmarty
2022-07-12T16:03:03Z
0
0
null
[ "tensorboard", "vit", "image-classification", "dataset:beans", "region:us" ]
image-classification
2022-07-12T16:02:58Z
--- pipeline_tag: image-classification datasets: - beans metrics: - accuracy tags: - vit --- **task**: `image-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **model_name_or_path**: `nateraw/vit-base-beans` * **dataset**: * **path**: `beans` * **eval_split**: `validation` * **data_keys**: `{'primary': 'image'}` * **ref_keys**: `['labels']` * **calibration_split**: `train` * **quantization_approach**: `dynamic` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **operators_to_quantize**: `['Add']`, `['Add', 'MatMul']` * **node_exclusion**: `[]`, `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` * **per_channel**: `False`, `True` # Evaluation ## Non-time metrics | operators_to_quantize | node_exclusion | per_channel | | accuracy (original) | accuracy (optimized) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-----------------: | :------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 0.980 | 0.980 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 0.980 | 0.980 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 0.980 | 0.980 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 0.980 | 0.980 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 0.980 | 0.980 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 0.980 | 0.980 | | `['Add']` | `[]` | `False` | \| | 0.980 | 0.980 | | `['Add']` | `[]` | `True` | \| | 0.980 | 0.980 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 1, input length = 32. | operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 200.50 | 63.00 | \| | 5.00 | 15.93 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 198.19 | 72.65 | \| | 5.07 | 13.80 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 191.44 | 63.27 | \| | 5.27 | 15.87 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 154.84 | 72.51 | \| | 6.47 | 13.80 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 155.84 | 130.95 | \| | 6.47 | 7.67 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 201.76 | 131.25 | \| | 5.00 | 7.67 | | `['Add']` | `[]` | `False` | \| | 198.96 | 128.82 | \| | 5.07 | 7.80 | | `['Add']` | `[]` | `True` | \| | 163.76 | 129.62 | \| | 6.13 | 7.73 | Below, time metrics for batch size = 1, input length = 64. 
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 162.75 | 67.18 | \| | 6.20 | 14.93 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 159.69 | 72.77 | \| | 6.33 | 13.80 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 183.10 | 64.02 | \| | 5.47 | 15.67 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 157.21 | 64.16 | \| | 6.40 | 15.60 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 155.32 | 130.74 | \| | 6.47 | 7.67 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 198.56 | 162.51 | \| | 5.07 | 6.20 | | `['Add']` | `[]` | `False` | \| | 186.58 | 163.38 | \| | 5.40 | 6.13 | | `['Add']` | `[]` | `True` | \| | 199.75 | 131.46 | \| | 5.07 | 7.67 | Below, time metrics for batch size = 1, input length = 128. | operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 160.58 | 67.65 | \| | 6.27 | 14.80 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 158.60 | 72.53 | \| | 6.33 | 13.80 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 200.46 | 62.95 | \| | 5.00 | 15.93 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 195.39 | 72.28 | \| | 5.13 | 13.87 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 197.59 | 128.80 | \| | 5.07 | 7.80 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 156.24 | 162.63 | \| | 6.47 | 6.20 | | `['Add']` | `[]` | `False` | \| | 157.25 | 129.13 | \| | 6.40 | 7.80 | | `['Add']` | `[]` | `True` | \| | 176.08 | 161.79 | \| | 5.73 | 6.20 | Below, time metrics for batch size = 4, input length = 32. 
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 503.83 | 219.62 | \| | 2.00 | 4.60 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 603.26 | 266.15 | \| | 1.67 | 3.80 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 654.79 | 217.45 | \| | 1.53 | 4.60 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 654.33 | 219.54 | \| | 1.53 | 4.60 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 654.20 | 481.61 | \| | 1.53 | 2.13 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 609.81 | 632.73 | \| | 1.67 | 1.60 | | `['Add']` | `[]` | `False` | \| | 588.86 | 602.91 | \| | 1.73 | 1.67 | | `['Add']` | `[]` | `True` | \| | 666.98 | 655.32 | \| | 1.53 | 1.53 | Below, time metrics for batch size = 4, input length = 64. | operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 656.87 | 216.32 | \| | 1.53 | 4.67 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 507.24 | 265.62 | \| | 2.00 | 3.80 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 655.36 | 219.61 | \| | 1.53 | 4.60 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 613.28 | 220.96 | \| | 1.67 | 4.53 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 656.30 | 652.72 | \| | 1.53 | 1.53 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 521.09 | 472.90 | \| | 1.93 | 2.13 | | `['Add']` | `[]` | `False` | \| | 655.37 | 473.77 | \| | 1.53 | 2.13 | | `['Add']` | `[]` | `True` | \| | 653.62 | 468.82 | \| | 1.53 | 2.13 | Below, time metrics for batch size = 4, input length = 128. 
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 654.24 | 216.82 | \| | 1.53 | 4.67 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 657.16 | 240.11 | \| | 1.53 | 4.20 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 504.14 | 217.47 | \| | 2.00 | 4.60 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 655.94 | 220.12 | \| | 1.53 | 4.60 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 653.99 | 479.06 | \| | 1.53 | 2.13 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 642.48 | 666.28 | \| | 1.60 | 1.53 | | `['Add']` | `[]` | `False` | \| | 656.34 | 661.24 | \| | 1.53 | 1.53 | | `['Add']` | `[]` | `True` | \| | 661.86 | 472.49 | \| | 1.53 | 2.13 | Below, time metrics for batch size = 8, input length = 32. | operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1294.07 | 472.54 | \| | 0.80 | 2.13 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1287.58 | 542.72 | \| | 0.80 | 1.87 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 1033.37 | 433.32 | \| | 1.00 | 2.33 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 1030.14 | 542.36 | \| | 1.00 | 1.87 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 953.27 | 926.14 | \| | 1.07 | 1.13 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1173.01 | 995.22 | \| | 0.87 | 1.07 | | `['Add']` | `[]` | `False` | \| | 1280.07 | 926.97 | \| | 0.80 | 1.13 | | `['Add']` | `[]` | `True` | \| | 1283.70 | 927.87 | \| | 0.80 | 1.13 | Below, time metrics for batch size = 8, input length = 64. 
| operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1273.61 | 435.27 | \| | 0.80 | 2.33 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1157.00 | 542.75 | \| | 0.87 | 1.87 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 968.85 | 537.65 | \| | 1.07 | 1.87 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 1107.66 | 472.53 | \| | 0.93 | 2.13 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1270.30 | 1092.10 | \| | 0.80 | 0.93 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1263.29 | 1012.66 | \| | 0.80 | 1.00 | | `['Add']` | `[]` | `False` | \| | 1007.19 | 1331.12 | \| | 1.07 | 0.80 | | `['Add']` | `[]` | `True` | \| | 1286.51 | 1317.96 | \| | 0.80 | 0.80 | Below, time metrics for batch size = 8, input length = 128. | operators_to_quantize | node_exclusion | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :------------------------------------------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1188.98 | 537.58 | \| | 0.87 | 1.87 | | `['Add', 'MatMul']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 951.31 | 489.40 | \| | 1.07 | 2.07 | | `['Add', 'MatMul']` | `[]` | `False` | \| | 1278.73 | 537.52 | \| | 0.80 | 1.87 | | `['Add', 'MatMul']` | `[]` | `True` | \| | 1005.38 | 440.01 | \| | 1.07 | 2.33 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `False` | \| | 1265.55 | 1304.51 | \| | 0.80 | 0.80 | | `['Add']` | `['layernorm', 'gelu', 'residual', 'gather', 'softmax']` | `True` | \| | 1186.54 | 934.09 | \| | 0.87 | 1.13 | | `['Add']` | `[]` | `False` | \| | 1276.38 | 1319.84 | \| | 0.80 | 0.80 | | `['Add']` | `[]` | `True` | \| | 981.81 | 940.69 | \| | 1.07 | 1.07 |
MarLac/wav2vec2-base-timit-demo-google-colab
MarLac
2022-07-12T15:41:51Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-12T08:24:30Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5816 - Wer: 0.3533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.243 | 0.5 | 500 | 1.0798 | 0.7752 | | 0.834 | 1.01 | 1000 | 0.6206 | 0.5955 | | 0.5503 | 1.51 | 1500 | 0.5387 | 0.5155 | | 0.4548 | 2.01 | 2000 | 0.4660 | 0.4763 | | 0.3412 | 2.51 | 2500 | 0.8381 | 0.4836 | | 0.3128 | 3.02 | 3000 | 0.4818 | 0.4519 | | 0.2547 | 3.52 | 3500 | 0.4415 | 0.4230 | | 0.2529 | 4.02 | 4000 | 0.4624 | 0.4219 | | 0.2103 | 4.52 | 4500 | 0.4714 | 0.4096 | | 0.2102 | 5.03 | 5000 | 0.4968 | 0.4087 | | 0.1838 | 5.53 | 5500 | 0.4643 | 0.4131 | | 0.1721 | 6.03 | 6000 | 0.4676 | 0.3979 | | 0.1548 | 6.53 | 6500 | 0.4765 | 0.4085 | | 0.1595 | 7.04 | 7000 | 0.4797 | 0.3941 | | 0.1399 | 7.54 | 7500 | 0.4753 | 0.3902 | | 0.1368 | 8.04 | 8000 | 0.4697 | 0.3945 | | 0.1276 | 8.54 | 8500 | 0.5438 | 0.3869 | | 0.1255 | 9.05 | 9000 | 0.5660 | 0.3841 | | 0.1077 | 9.55 | 9500 | 0.4964 | 0.3947 | | 0.1197 | 10.05 | 10000 | 0.5349 | 0.3849 | | 0.1014 | 10.55 | 10500 | 0.5558 | 0.3883 | | 0.0949 | 11.06 | 11000 | 0.5673 | 0.3785 | | 0.0882 | 11.56 | 11500 | 0.5589 | 0.3955 | | 0.0906 | 12.06 | 12000 | 0.5752 | 0.4120 | | 0.1064 | 12.56 | 12500 | 0.5080 | 0.3727 | | 0.0854 | 13.07 | 13000 | 0.5398 | 0.3798 | | 0.0754 | 13.57 | 13500 | 0.5237 | 0.3816 | | 0.0791 | 14.07 | 14000 | 0.4967 | 0.3725 | | 0.0731 | 14.57 | 14500 | 0.5287 | 0.3744 | | 0.0719 | 15.08 | 15000 | 0.5633 | 0.3596 | | 0.062 | 15.58 | 15500 | 0.5399 | 0.3752 | | 0.0681 | 16.08 | 16000 | 0.5151 | 0.3759 | | 0.0559 | 16.58 | 16500 | 0.5564 | 0.3709 | | 0.0533 | 17.09 | 17000 | 0.5933 | 0.3743 | | 0.0563 | 17.59 | 17500 | 0.5381 | 0.3670 | | 0.0527 | 18.09 | 18000 | 0.5685 | 0.3731 | | 0.0492 | 18.59 | 18500 | 0.5728 | 0.3725 | | 0.0509 | 19.1 | 19000 | 0.6074 | 0.3807 | | 0.0436 | 19.6 | 19500 | 0.5762 | 0.3628 | | 0.0434 | 20.1 | 20000 | 0.6721 | 0.3729 | | 0.0416 | 20.6 | 20500 | 0.5842 | 0.3700 | | 0.0431 | 21.11 | 21000 | 0.5374 | 0.3607 | | 0.037 | 21.61 | 21500 | 0.5556 | 0.3667 | | 0.036 | 22.11 | 22000 | 0.5608 | 0.3592 | | 0.04 | 22.61 | 22500 | 0.5272 | 0.3637 | | 0.047 | 23.12 | 23000 | 0.5234 | 0.3625 | | 0.0506 | 23.62 | 23500 | 0.5427 | 0.3629 | | 0.0418 | 24.12 | 24000 | 0.5590 | 0.3626 | | 0.037 | 24.62 | 24500 | 0.5615 | 0.3555 | | 0.0429 | 25.13 | 25000 | 0.5806 | 0.3616 | | 0.045 | 25.63 | 25500 | 0.5777 | 0.3639 | | 0.0283 | 26.13 | 26000 | 0.5987 | 0.3617 | | 0.0253 | 26.63 | 26500 | 0.5671 | 
0.3551 | | 0.032 | 27.14 | 27000 | 0.5464 | 0.3582 | | 0.0321 | 27.64 | 27500 | 0.5634 | 0.3573 | | 0.0274 | 28.14 | 28000 | 0.5513 | 0.3575 | | 0.0245 | 28.64 | 28500 | 0.5745 | 0.3537 | | 0.0251 | 29.15 | 29000 | 0.5759 | 0.3547 | | 0.0222 | 29.65 | 29500 | 0.5816 | 0.3533 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
andreaschandra/xlm-roberta-base-finetuned-panx-fr
andreaschandra
2022-07-12T15:30:15Z
6
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-12T15:15:58Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.9275221167113059 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1059 - F1: 0.9275 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5416 | 1.0 | 191 | 0.2322 | 0.8378 | | 0.2614 | 2.0 | 382 | 0.1544 | 0.8866 | | 0.1758 | 3.0 | 573 | 0.1059 | 0.9275 | ### Framework versions - Transformers 4.19.4 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
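A minimal inference sketch for this checkpoint, assuming the standard `transformers` pipeline API; the French example sentence is illustrative only:

```python
from transformers import pipeline

# Token-classification (NER) pipeline over the fine-tuned PAN-X.fr checkpoint.
ner = pipeline(
    "token-classification",
    model="andreaschandra/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Emmanuel Macron est né à Amiens, en France."))
```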
Kuro96/q-FrozenLake-v1-4x4-noSlippery
Kuro96
2022-07-12T14:35:27Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T14:35:21Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Kuro96/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Kuro96/q-Taxi-v3
Kuro96
2022-07-12T14:25:59Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T14:25:52Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.50 +/- 2.76 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Kuro96/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
hugginglearners/pokemon-card-checker
hugginglearners
2022-07-12T13:21:57Z
0
1
fastai
[ "fastai", "resnet", "computer-vision", "classification", "image-classification", "binary-classification", "license:cc0-1.0", "region:us" ]
image-classification
2022-06-20T14:02:07Z
--- tags: - fastai - resnet - computer-vision - classification - image-classification - binary-classification license: - cc0-1.0 --- # Resnet34 Pokemon Card Classifier ## Model Description This is a resnet34 model fine-tuned with fastai to [classify real and fake Pokemon cards (dataset)](https://www.kaggle.com/datasets/ongshujian/real-and-fake-pokemon-cards). Here is a Colab notebook that shows how the model was trained and pushed to the hub: [link](https://github.com/mindwrapped/pokemon-card-checker/blob/main/pokemon_card_checker.ipynb). ## Intended uses & limitations This model is trained to identify real vs fake cards based on the backs of the cards, not the fronts. ## How to use ```python from huggingface_hub import from_pretrained_fastai # Pull model from hub learn = from_pretrained_fastai('hugginglearners/pokemon-card-checker') # Get prediction for this image pred_label, _, scores = learn.predict(img) ``` ## Training data Dataset located here: [link](https://www.kaggle.com/datasets/ongshujian/real-and-fake-pokemon-cards).
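The usage snippet above assumes `img` has already been loaded; one way to create it with fastai is shown below (the file path is hypothetical and should point at a photo of a card back):

```python
from fastai.vision.core import PILImage

# Hypothetical path to a photo of the back of a Pokemon card.
img = PILImage.create("card_back.jpg")
```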
cffl/bart-base-styletransfer-subjective-to-neutral
cffl
2022-07-12T11:58:08Z
286
3
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "arxiv:1911.09709", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-01T18:41:46Z
--- license: apache-2.0 --- # bart-base-styletransfer-subjective-to-neutral ## Model description This [facebook/bart-base](https://huggingface.co/facebook/bart-base) model has been fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://arxiv.org/pdf/1911.09709.pdf) - a parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The model can be used to transfer style in text from subjectively biased to neutrally toned. The development and modeling efforts that produced this model are documented in detail through [this blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html). ## Intended uses & limitations The model is intended purely as a research output for NLP and data science communities. We imagine this model will be used by researchers to better understand the limitations, robustness, and generalization of text style transfer models. Ultimately, we hope this model will inspire future work on text style transfer and serve as a benchmarking tool for the style attribute of subjectivity bias, specifically. Any production use of this model - whether commercial or not - is currently not intended. This is because, as [the team at OpenAI points out](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases), large language models like BART reflect biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans, unless the deployers first carry out a study of biases relevant to the intended use-case. Neither the model nor the WNC dataset has been sufficiently evaluated for performance and bias. Our efforts quantified model performance using two custom evaluation metrics, neither of which has been correlated to human evaluation for the task. As we discuss in the blog series, since the WNC is a parallel dataset and we formulate the learning task as a supervised problem, the model indirectly adopts Wikipedia's NPOV policy as the definition for "neutrality" and "subjectivity". The NPOV policy may not fully reflect an end user's assumed/intended meaning of subjectivity because the notion of subjectivity itself can be...well, subjective. We discovered through our exploratory work that the WNC does contain data quality issues that will contribute to unintended bias in the model. For example, some NPOV revisions introduce factual information outside the context of the prompt as a means to correct bias. We believe these fact-based edits are out of scope for a subjective-to-neutral style transfer modeling task, but exist here nonetheless. ## How to use This model can be used directly with a HuggingFace pipeline for `text2text-generation`. ```python >>> from transformers import pipeline >>> styletransfer = pipeline( task="text2text-generation", model="cffl/bart-base-styletransfer-subjective-to-neutral", max_length=200, ) >>> input_text = "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information."
>>> styletransfer(input_text) [{'generated_text': 'chemical abstracts service (cas), a division of the american chemical society, is a source of chemical information.'}] ``` ## Training procedure For modeling, we made extensive use of the Huggingface transformers library by initializing the [BartForConditionalGeneration](https://huggingface.co/docs/transformers/model_doc/bart#transformers.BartForConditionalGeneration) model with [facebook/bart-base](https://huggingface.co/facebook/bart-base) pretrained weights and adapting the [summarization fine-tuning script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/summarization) for our TST-specific needs. We fine-tune the model for 15 epochs on an NVIDIA Tesla V100 GPU with a batch size of 32. (Note that when fine-tuning the model with the parallel examples, the noising function is turned off so an uncorrupted document is passed to BART's encoder and decoder.) Please refer to [our blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html) for a discussion of evaluation metrics and results.
cffl/bert-base-styleclassification-subjective-neutral
cffl
2022-07-12T11:57:42Z
2,297
8
transformers
[ "transformers", "pytorch", "bert", "text-classification", "arxiv:1911.09709", "arxiv:1703.01365", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-01T19:35:53Z
--- license: apache-2.0 --- # bert-base-styleclassification-subjective-neutral ## Model description This [bert-base-uncased](https://huggingface.co/bert-base-uncased) model has been fine-tuned on the [Wiki Neutrality Corpus (WNC)](https://arxiv.org/pdf/1911.09709.pdf) - a parallel corpus of 180,000 biased and neutralized sentence pairs along with contextual sentences and metadata. The model can be used to classify text as subjectively biased vs. neutrally toned. The development and modeling efforts that produced this model are documented in detail through [this blog series](https://blog.fastforwardlabs.com/2022/05/05/neutralizing-subjectivity-bias-with-huggingface-transformers.html). ## Intended uses & limitations The model is intended purely as a research output for NLP and data science communities. We developed this model for the purpose of evaluating text style transfer output. Specifically, we derive a Style Transfer Intensity (STI) metric from the classifier's output distributions. We also extract feature importances from the model via [Integrated Gradients](https://arxiv.org/pdf/1703.01365.pdf) to support a Content Preservation Score (CPS). We imagine this model will be used by researchers to better understand the limitations, robustness, and generalization of text style transfer models. Ultimately, we hope this model will inspire future work on text style transfer and serve as a benchmarking tool for the style attribute of subjectivity bias, specifically. Any production use of this model - whether commercial or not - is currently not intended. This is because, as [the team at OpenAI points out](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases), large language models like BERT reflect biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans, unless the deployers first carry out a study of biases relevant to the intended use-case. Neither the model nor the WNC dataset has been sufficiently evaluated for performance and bias. As we discuss in the blog series, since the WNC is a parallel dataset and we formulate the learning task as a supervised problem, the model indirectly adopts Wikipedia's NPOV policy as the definition for "neutrality" and "subjectivity". The NPOV policy may not fully reflect an end user's assumed/intended meaning of subjectivity because the notion of subjectivity itself can be...well, subjective. We discovered through our exploratory work that the WNC does contain data quality issues that will contribute to unintended bias in the model. For example, some NPOV revisions introduce factual information outside the context of the prompt as a means to correct bias. We believe these fact-based edits are out of scope for a subjective-to-neutral style transfer modeling task, but exist here nonetheless. ## How to use This model can be used directly with a HuggingFace pipeline for `text-classification`. ```python >>> from transformers import pipeline >>> classify = pipeline( task="text-classification", model="cffl/bert-base-styleclassification-subjective-neutral", return_all_scores=True, ) >>> input_text = "chemical abstracts service (cas), a prominent division of the american chemical society, is the world's leading source of chemical information."
>>> classify(input_text) [[{'label': 'SUBJECTIVE', 'score': 0.9765084385871887}, {'label': 'NEUTRAL', 'score': 0.023491567000746727}]] ``` ## Training procedure For training, we initialize HuggingFace’s [AutoModelForSequenceClassification](https://huggingface.co/docs/transformers/model_doc/auto#transformers.AutoModelForSequenceClassification) with [bert-base-uncased](https://huggingface.co/bert-base-uncased) pre-trained weights and perform a hyperparameter search over: batch size [16, 32], learning rate [3e-05, 3e-06, 3e-07], weight decay [0, 0.01, 0.1] and batch shuffling [True, False] while training for 15 epochs. We monitor performance using accuracy as we have a perfectly balanced dataset and assign equal cost to false positives and false negatives. The best performing model produces an overall accuracy of 72.50% -- please reference our [training script](https://github.com/fastforwardlabs/text-style-transfer/blob/main/scripts/train/classifier/train_classifier.py) and [classifier evaluation notebook](https://github.com/fastforwardlabs/text-style-transfer/blob/main/notebooks/WNC_full_style_classifier_evaluation.ipynb) for further details.
Vikasbhandari/wav2vec2-train
Vikasbhandari
2022-07-12T11:51:48Z
3
0
transformers
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "wav2vec2", "automatic-speech-recognition", "speech", "audio", "hf-asr-leaderboard", "en", "dataset:librispeech_asr", "arxiv:2010.11430", "arxiv:2006.11477", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-12T11:11:37Z
--- language: en datasets: - librispeech_asr tags: - speech - audio - automatic-speech-recognition - hf-asr-leaderboard license: apache-2.0 model-index: - name: wav2vec2-large-960h-lv60 results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (clean) type: librispeech_asr config: clean split: test args: language: en metrics: - name: Test WER type: wer value: 1.9 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: LibriSpeech (other) type: librispeech_asr config: other split: test args: language: en metrics: - name: Test WER type: wer value: 3.9 --- # Wav2Vec2-Large-960h-Lv60 + Self-Training [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The large model pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech on 16kHz sampled speech audio. Model was trained with [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model make sure that your speech input is also sampled at 16Khz. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage To transcribe audio files the model can be used as a standalone acoustic model as follows: ```python from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC from datasets import load_dataset import torch # load model and processor processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") # load dummy dataset and read soundfiles ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") # tokenize input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # retrieve logits logits = model(input_values).logits # take argmax and decode predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) ``` ## Evaluation This code snippet shows how to evaluate **facebook/wav2vec2-large-960h-lv60-self** on LibriSpeech's "clean" and "other" test data. 
```python from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import torch from jiwer import wer librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self").to("cuda") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self") def map_to_pred(batch): inputs = processor(batch["audio"]["array"], return_tensors="pt", padding="longest") input_values = inputs.input_values.to("cuda") attention_mask = inputs.attention_mask.to("cuda") with torch.no_grad(): logits = model(input_values, attention_mask=attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = processor.batch_decode(predicted_ids) batch["transcription"] = transcription return batch result = librispeech_eval.map(map_to_pred, remove_columns=["audio"]) print("WER:", wer(result["text"], result["transcription"])) ``` *Result (WER)*: | "clean" | "other" | |---|---| | 1.9 | 3.9 |
MiguelCosta/finetuning-sentiment-model-24000-samples
MiguelCosta
2022-07-12T10:48:14Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-12T06:17:23Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-24000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.9266666666666666 - name: F1 type: f1 value: 0.9273927392739274 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-24000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3505 - Accuracy: 0.9267 - F1: 0.9274 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
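A minimal inference sketch, assuming the standard `transformers` pipeline API; the review text is illustrative, and the returned label names (e.g. `LABEL_0`/`LABEL_1`) depend on the fine-tuning configuration:

```python
from transformers import pipeline

# Sentiment classifier fine-tuned on IMDB reviews.
classifier = pipeline(
    "text-classification",
    model="MiguelCosta/finetuning-sentiment-model-24000-samples",
)
print(classifier("An unexpectedly moving film with terrific performances."))
```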
paola-md/recipe-distilbert-upper-tIs
paola-md
2022-07-12T10:28:07Z
7
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-12T07:36:46Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: recipe-distilbert-upper-tIs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-distilbert-upper-tIs This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8746 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.67 | 1.0 | 1353 | 1.2945 | | 1.2965 | 2.0 | 2706 | 1.1547 | | 1.1904 | 3.0 | 4059 | 1.0846 | | 1.1272 | 4.0 | 5412 | 1.0407 | | 1.0857 | 5.0 | 6765 | 1.0039 | | 1.0549 | 6.0 | 8118 | 0.9802 | | 1.03 | 7.0 | 9471 | 0.9660 | | 1.01 | 8.0 | 10824 | 0.9474 | | 0.9931 | 9.0 | 12177 | 0.9365 | | 0.9807 | 10.0 | 13530 | 0.9252 | | 0.9691 | 11.0 | 14883 | 0.9105 | | 0.9601 | 12.0 | 16236 | 0.9079 | | 0.9503 | 13.0 | 17589 | 0.8979 | | 0.9436 | 14.0 | 18942 | 0.8930 | | 0.9371 | 15.0 | 20295 | 0.8875 | | 0.9322 | 16.0 | 21648 | 0.8851 | | 0.9279 | 17.0 | 23001 | 0.8801 | | 0.9254 | 18.0 | 24354 | 0.8812 | | 0.9227 | 19.0 | 25707 | 0.8768 | | 0.9232 | 20.0 | 27060 | 0.8746 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
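A minimal fill-mask sketch, assuming the checkpoint keeps distilbert's `[MASK]` token; the recipe-style sentence and its casing are illustrative only:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="paola-md/recipe-distilbert-upper-tIs")
print(fill_mask("Preheat the [MASK] to 350 degrees and grease a baking pan."))
```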
nawta/wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained
nawta
2022-07-12T10:20:53Z
15
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-12T05:31:38Z
--- tags: - generated_from_trainer model-index: - name: wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-onomatopoeia-finetune_smalldata_ESC50pretrained This model is a fine-tuned version of [/root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin](https://huggingface.co//root/workspace/wav2vec2-pretrained_with_ESC50_10000epochs_32batch_2022-07-09_22-16-46/pytorch_model.bin) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2963 - Cer: 0.9002 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.3287 | 23.81 | 500 | 2.2963 | 0.9002 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.1+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
luke-thorburn/suggest-objections-bias-only
luke-thorburn
2022-07-12T10:08:02Z
6
1
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "argumentation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - argumentation license: apache-2.0 metrics: - perplexity --- # Generate objections to a claim This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review. # Prompt Template ``` List objections to the claim that: [original claim] Objections: * [objection 1] * [objection 2] ... * [objection n] * [generated objection] ``` # Dataset The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/). # Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon. # Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
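A generation sketch that fills in the template above; the claim, the seed objection, the newlines, and the decoding settings are illustrative assumptions rather than the exact setup used in the research project:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "luke-thorburn/suggest-objections-bias-only"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a prompt following the template, ending with "* " so the model continues with a new objection.
prompt = (
    "List objections to the claim that: remote work should be the default for office jobs\n"
    "Objections:\n"
    "* Spontaneous collaboration is harder when teams are fully distributed.\n"
    "* "
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```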
luke-thorburn/suggest-reasons-bias-only
luke-thorburn
2022-07-12T10:07:19Z
8
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "argumentation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - argumentation license: apache-2.0 metrics: - perplexity --- # Generate reasons that support a claim This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where some parameters (only the bias parameters, not weights) have been finetuned on the task of generating reasons that support a claim, optionally given some example reasons. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review. # Prompt Template ``` List reasons why: [original claim] Reasons: * [reason 1] * [reason 2] ... * [reason n] * [generated reason] ``` # Dataset The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/). # Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon. # Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
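A generation sketch using the `transformers` text-generation pipeline; the claim, the newlines in the prompt, and the sampling settings are illustrative assumptions:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="luke-thorburn/suggest-reasons-bias-only")

# Prompt follows the template above, ending with "* " so the model continues with a new reason.
prompt = "List reasons why: cities should invest in cycling infrastructure\nReasons:\n* "
print(generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```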
luke-thorburn/suggest-intermediary-claims-full-finetune
luke-thorburn
2022-07-12T09:56:47Z
10
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "argumentation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - argumentation license: apache-2.0 metrics: - perplexity --- # Generate a chain of reasoning from one claim to another This model is a version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), where all parameters (both weights and biases) have been finetuned on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review. # Prompt Template ``` Input: [start claim] -> [end claim] Output: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim] ``` # Dataset The parameters were finetuned using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/). # Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon. # Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
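A generation sketch that follows the template above; the start and end claims, the exact spacing/newlines in the prompt, and the decoding settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "luke-thorburn/suggest-intermediary-claims-full-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt ends after the first "->" of the Output line so the model fills in the chain of reasoning.
prompt = (
    "Input: car ownership is declining -> cities should rezone parking lots\n"
    "Output: car ownership is declining ->"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```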
luke-thorburn/suggest-intermediary-claims-soft
luke-thorburn
2022-07-12T09:48:47Z
7
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "argumentation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - argumentation license: apache-2.0 metrics: - perplexity --- # Generate a chain of reasoning from one claim to another This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating a sequence of claims (a 'chain of reasoning') that joins one claim to another. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review. # Prompt Template ``` [prepended soft prompt][start claim] -> [end claim] Answer: [start claim] -> [generated intermediate claim 1] -> ... -> [generated intermediate claim n] -> [end claim] ``` # Dataset The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/). # Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon. # Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
luke-thorburn/suggest-objections-soft
luke-thorburn
2022-07-12T09:43:28Z
7
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "argumentation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - argumentation license: apache-2.0 metrics: - perplexity --- # Generate objections to a claim This model has the same model parameters as [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), but with an additional soft prompt which has been optimized on the task of generating the objections to a claim, optionally given some example objections to that claim. It was trained as part of a University of Melbourne [research project](https://github.com/Hunt-Laboratory/language-model-optimization) evaluating how large language models can best be optimized to perform argumentative reasoning tasks. Code used for optimization and evaluation can be found in the project [GitHub repository](https://github.com/Hunt-Laboratory/language-model-optimization). A paper reporting on model evaluation is currently under review. # Prompt Template ``` [prepended soft prompt][original claim] Cons: - [objection 1] - [objection 2] ... - [objection n] - [generated objection] ``` # Dataset The soft prompt was trained using argument maps scraped from the crowdsourced argument-mapping platform [Kialo](https://kialo.com/). # Limitations and Biases The model is a finetuned version of [`gpt-neo-2.7B`](https://huggingface.co/EleutherAI/gpt-neo-2.7B), so likely has many of the same limitations and biases. Additionally, note that while the goal of the model is to produce coherent and valid reasoning, many generated model outputs will be illogical or nonsensical and should not be relied upon. # Acknowledgements This research was funded by the Australian Department of Defence and the Office of National Intelligence under the AI for Decision Making Program, delivered in partnership with the Defence Science Institute in Victoria, Australia.
robingeibel/led-base-16384-finetuned-big_patent
robingeibel
2022-07-12T09:28:33Z
6
1
transformers
[ "transformers", "pytorch", "tf", "tensorboard", "led", "feature-extraction", "generated_from_keras_callback", "endpoints_compatible", "region:us" ]
feature-extraction
2022-06-10T08:18:34Z
--- tags: - generated_from_keras_callback model-index: - name: led-base-16384-finetuned-big_patent results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # led-base-16384-finetuned-big_patent This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
fxmarty/20220712-h08m05s32_
fxmarty
2022-07-12T08:05:37Z
0
0
null
[ "tensorboard", "vit", "image-classification", "dataset:beans", "region:us" ]
image-classification
2022-07-12T08:05:32Z
--- pipeline_tag: image-classification datasets: - beans metrics: - accuracy tags: - vit --- **task**: `image-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': None}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **model_name_or_path**: `nateraw/vit-base-beans` * **dataset**: * **path**: `beans` * **eval_split**: `validation` * **data_keys**: `{'primary': 'image'}` * **ref_keys**: `['labels']` * **quantization_approach**: `dynamic` * **node_exclusion**: `[]` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']`, `[]` * **per_channel**: `False`, `True` # Evaluation ## Non-time metrics | operators_to_quantize | per_channel | | accuracy (original) | accuracy (optimized) | | :-------------------: | :---------: | :-: | :-----------------: | :------------------: | | `['Add', 'MatMul']` | `False` | \| | 0.980 | 0.980 | | `['Add', 'MatMul']` | `True` | \| | 0.980 | 0.980 | | `['Add']` | `False` | \| | 0.980 | 0.980 | | `['Add']` | `True` | \| | 0.980 | 0.980 | | `[]` | `False` | \| | 0.980 | 0.980 | | `[]` | `True` | \| | 0.980 | 0.980 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 1, input length = 32. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 201.25 | 70.30 | \| | 5.00 | 14.27 | | `['Add', 'MatMul']` | `True` | \| | 203.52 | 72.48 | \| | 4.93 | 13.80 | | `['Add']` | `False` | \| | 166.03 | 150.93 | \| | 6.07 | 6.67 | | `['Add']` | `True` | \| | 200.82 | 163.17 | \| | 5.00 | 6.13 | | `[]` | `False` | \| | 190.99 | 162.06 | \| | 5.27 | 6.20 | | `[]` | `True` | \| | 155.15 | 162.52 | \| | 6.47 | 6.20 | Below, time metrics for batch size = 1, input length = 64. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 165.85 | 70.60 | \| | 6.07 | 14.20 | | `['Add', 'MatMul']` | `True` | \| | 161.41 | 72.71 | \| | 6.20 | 13.80 | | `['Add']` | `False` | \| | 200.45 | 129.40 | \| | 5.00 | 7.73 | | `['Add']` | `True` | \| | 154.68 | 136.42 | \| | 6.47 | 7.40 | | `[]` | `False` | \| | 166.97 | 162.15 | \| | 6.00 | 6.20 | | `[]` | `True` | \| | 166.32 | 162.81 | \| | 6.07 | 6.20 | Below, time metrics for batch size = 1, input length = 128. 
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 199.48 | 70.98 | \| | 5.07 | 14.13 | | `['Add', 'MatMul']` | `True` | \| | 199.65 | 71.78 | \| | 5.07 | 13.93 | | `['Add']` | `False` | \| | 199.08 | 137.97 | \| | 5.07 | 7.27 | | `['Add']` | `True` | \| | 189.93 | 162.45 | \| | 5.33 | 6.20 | | `[]` | `False` | \| | 191.63 | 162.54 | \| | 5.27 | 6.20 | | `[]` | `True` | \| | 200.38 | 162.55 | \| | 5.00 | 6.20 | Below, time metrics for batch size = 4, input length = 32. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 655.84 | 243.33 | \| | 1.53 | 4.13 | | `['Add', 'MatMul']` | `True` | \| | 661.27 | 221.16 | \| | 1.53 | 4.53 | | `['Add']` | `False` | \| | 662.84 | 529.28 | \| | 1.53 | 1.93 | | `['Add']` | `True` | \| | 512.47 | 470.66 | \| | 2.00 | 2.13 | | `[]` | `False` | \| | 562.81 | 501.77 | \| | 1.80 | 2.00 | | `[]` | `True` | \| | 505.81 | 521.20 | \| | 2.00 | 1.93 | Below, time metrics for batch size = 4, input length = 64. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 654.58 | 258.54 | \| | 1.53 | 3.93 | | `['Add', 'MatMul']` | `True` | \| | 617.44 | 234.05 | \| | 1.67 | 4.33 | | `['Add']` | `False` | \| | 661.51 | 478.81 | \| | 1.53 | 2.13 | | `['Add']` | `True` | \| | 657.01 | 660.23 | \| | 1.53 | 1.53 | | `[]` | `False` | \| | 661.64 | 474.28 | \| | 1.53 | 2.13 | | `[]` | `True` | \| | 661.29 | 471.09 | \| | 1.53 | 2.13 | Below, time metrics for batch size = 4, input length = 128. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 654.80 | 219.38 | \| | 1.53 | 4.60 | | `['Add', 'MatMul']` | `True` | \| | 663.50 | 222.37 | \| | 1.53 | 4.53 | | `['Add']` | `False` | \| | 625.56 | 529.02 | \| | 1.60 | 1.93 | | `['Add']` | `True` | \| | 655.08 | 499.41 | \| | 1.53 | 2.07 | | `[]` | `False` | \| | 655.92 | 473.01 | \| | 1.53 | 2.13 | | `[]` | `True` | \| | 505.54 | 659.92 | \| | 2.00 | 1.53 | Below, time metrics for batch size = 8, input length = 32. 
| operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 968.83 | 443.80 | \| | 1.07 | 2.27 | | `['Add', 'MatMul']` | `True` | \| | 1255.70 | 489.55 | \| | 0.80 | 2.07 | | `['Add']` | `False` | \| | 1301.35 | 938.14 | \| | 0.80 | 1.07 | | `['Add']` | `True` | \| | 1279.54 | 931.91 | \| | 0.80 | 1.13 | | `[]` | `False` | \| | 1292.66 | 1318.07 | \| | 0.80 | 0.80 | | `[]` | `True` | \| | 1290.35 | 1314.74 | \| | 0.80 | 0.80 | Below, time metrics for batch size = 8, input length = 64. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 1305.45 | 438.06 | \| | 0.80 | 2.33 | | `['Add', 'MatMul']` | `True` | \| | 1296.68 | 450.40 | \| | 0.80 | 2.27 | | `['Add']` | `False` | \| | 968.21 | 949.81 | \| | 1.07 | 1.07 | | `['Add']` | `True` | \| | 1012.35 | 1317.46 | \| | 1.00 | 0.80 | | `[]` | `False` | \| | 1213.91 | 961.79 | \| | 0.87 | 1.07 | | `[]` | `True` | \| | 956.39 | 945.41 | \| | 1.07 | 1.07 | Below, time metrics for batch size = 8, input length = 128. | operators_to_quantize | per_channel | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :---------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `['Add', 'MatMul']` | `False` | \| | 1120.12 | 497.17 | \| | 0.93 | 2.07 | | `['Add', 'MatMul']` | `True` | \| | 1289.50 | 443.46 | \| | 0.80 | 2.27 | | `['Add']` | `False` | \| | 1294.65 | 930.97 | \| | 0.80 | 1.13 | | `['Add']` | `True` | \| | 1181.21 | 933.82 | \| | 0.87 | 1.13 | | `[]` | `False` | \| | 1245.61 | 1318.07 | \| | 0.87 | 0.80 | | `[]` | `True` | \| | 1285.81 | 1318.82 | \| | 0.80 | 0.80 |
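The tables above come from an automated quantization benchmark, and the card itself does not include the quantization code. As a rough sketch only, dynamic quantization of the same checkpoint with a recent `optimum.onnxruntime` API (which may differ from the version and settings used for these runs) could look like this:

```python
from optimum.onnxruntime import ORTModelForImageClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Export the ViT checkpoint to ONNX, then apply dynamic quantization.
model = ORTModelForImageClassification.from_pretrained("nateraw/vit-base-beans", export=True)
quantizer = ORTQuantizer.from_pretrained(model)
qconfig = AutoQuantizationConfig.avx512(is_static=False, per_channel=False)
quantizer.quantize(save_dir="vit-base-beans-quantized", quantization_config=qconfig)
```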
fxmarty/20220712-h08m02s04_example
fxmarty
2022-07-12T08:02:09Z
0
0
null
[ "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "region:us" ]
token-classification
2022-07-12T08:02:04Z
--- pipeline_tag: token-classification datasets: - conll2003 metrics: - precision - recall - f1 - accuracy tags: - distilbert --- **task**: `token-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.m5.2xlarge', 'supported_instructions': 'avx512'}` **Number of evaluation samples:** `All dataset` Fixed parameters: * **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english` * **dataset**: * **path**: `conll2003` * **eval_split**: `validation` * **data_keys**: `{'primary': 'tokens'}` * **ref_keys**: `['ner_tags']` * **calibration_split**: `train` * **node_exclusion**: `[]` * **per_channel**: `False` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **quantization_approach**: `dynamic`, `static` * **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']` # Evaluation ## Non-time metrics | quantization_approach | operators_to_quantize | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) | | :-------------------: | :-------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 0.936 | 0.935 | \| | 0.944 | 0.943 | \| | 0.940 | 0.939 | \| | 0.988 | 0.988 | | `dynamic` | `['Add']` | \| | 0.936 | 0.936 | \| | 0.944 | 0.944 | \| | 0.940 | 0.940 | \| | 0.988 | 0.988 | | `static` | `['Add', 'MatMul']` | \| | 0.936 | 0.063 | \| | 0.944 | 0.246 | \| | 0.940 | 0.100 | \| | 0.988 | 0.343 | | `static` | `['Add']` | \| | 0.936 | 0.050 | \| | 0.944 | 0.160 | \| | 0.940 | 0.076 | \| | 0.988 | 0.311 | ## Time metrics Time benchmarks were run for 15 seconds per config. Below, time metrics for batch size = 1, input length = 32. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 46.38 | 9.96 | \| | 21.60 | 100.47 | | `dynamic` | `['Add']` | \| | 36.59 | 13.98 | \| | 27.33 | 71.60 | | `static` | `['Add', 'MatMul']` | \| | 33.84 | 14.46 | \| | 29.60 | 69.20 | | `static` | `['Add']` | \| | 33.23 | 20.11 | \| | 30.13 | 49.73 | Below, time metrics for batch size = 1, input length = 64. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 58.92 | 19.68 | \| | 17.00 | 50.87 | | `dynamic` | `['Add']` | \| | 58.59 | 24.81 | \| | 17.13 | 40.33 | | `static` | `['Add', 'MatMul']` | \| | 51.41 | 29.36 | \| | 19.47 | 34.07 | | `static` | `['Add']` | \| | 44.22 | 38.56 | \| | 22.67 | 25.93 | Below, time metrics for batch size = 1, input length = 128. 
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 72.38 | 36.47 | \| | 13.87 | 27.47 | | `dynamic` | `['Add']` | \| | 70.21 | 46.30 | \| | 14.27 | 21.60 | | `static` | `['Add', 'MatMul']` | \| | 70.76 | 48.24 | \| | 14.13 | 20.80 | | `static` | `['Add']` | \| | 72.47 | 71.10 | \| | 13.80 | 14.07 | Below, time metrics for batch size = 4, input length = 32. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 69.76 | 38.50 | \| | 14.40 | 26.00 | | `dynamic` | `['Add']` | \| | 56.02 | 51.32 | \| | 17.87 | 19.53 | | `static` | `['Add', 'MatMul']` | \| | 55.05 | 46.80 | \| | 18.20 | 21.40 | | `static` | `['Add']` | \| | 71.03 | 56.82 | \| | 14.13 | 17.67 | Below, time metrics for batch size = 4, input length = 64. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 119.91 | 61.51 | \| | 8.40 | 16.27 | | `dynamic` | `['Add']` | \| | 108.43 | 105.65 | \| | 9.27 | 9.47 | | `static` | `['Add', 'MatMul']` | \| | 119.89 | 86.76 | \| | 8.40 | 11.53 | | `static` | `['Add']` | \| | 96.99 | 102.03 | \| | 10.33 | 9.87 | Below, time metrics for batch size = 4, input length = 128. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 219.78 | 123.71 | \| | 4.60 | 8.13 | | `dynamic` | `['Add']` | \| | 220.13 | 187.21 | \| | 4.60 | 5.40 | | `static` | `['Add', 'MatMul']` | \| | 186.39 | 176.99 | \| | 5.40 | 5.67 | | `static` | `['Add']` | \| | 219.57 | 203.71 | \| | 4.60 | 4.93 | Below, time metrics for batch size = 8, input length = 32. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 118.32 | 59.22 | \| | 8.47 | 16.93 | | `dynamic` | `['Add']` | \| | 116.52 | 80.17 | \| | 8.60 | 12.53 | | `static` | `['Add', 'MatMul']` | \| | 116.59 | 83.55 | \| | 8.60 | 12.00 | | `static` | `['Add']` | \| | 115.81 | 126.53 | \| | 8.67 | 7.93 | Below, time metrics for batch size = 8, input length = 64. 
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 172.71 | 117.89 | \| | 5.80 | 8.53 | | `dynamic` | `['Add']` | \| | 166.05 | 156.99 | \| | 6.07 | 6.40 | | `static` | `['Add', 'MatMul']` | \| | 215.00 | 148.93 | \| | 4.67 | 6.73 | | `static` | `['Add']` | \| | 214.55 | 200.16 | \| | 4.67 | 5.00 | Below, time metrics for batch size = 8, input length = 128. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 403.69 | 307.36 | \| | 2.53 | 3.27 | | `dynamic` | `['Add']` | \| | 372.85 | 317.53 | \| | 2.73 | 3.20 | | `static` | `['Add', 'MatMul']` | \| | 352.18 | 320.85 | \| | 2.87 | 3.13 | | `static` | `['Add']` | \| | 403.55 | 410.17 | \| | 2.53 | 2.47 |
ArneD/xlm-roberta-base-finetuned-panx-all
ArneD
2022-07-12T07:50:58Z
4
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-12T06:47:20Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset (EN, FR, DE, IT). It achieves the following results on the evaluation set: - Loss: 0.1769 - F1: 0.8535 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2934 | 1.0 | 835 | 0.1853 | 0.8250 | | 0.1569 | 2.0 | 1670 | 0.1714 | 0.8438 | | 0.1008 | 3.0 | 2505 | 0.1769 | 0.8535 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
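A minimal inference sketch for this multilingual NER checkpoint, assuming the standard `transformers` pipeline API; the German example sentence is illustrative only:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArneD/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Goethe-Institut in Rom."))
```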
fxmarty/20220712-h07m20s32_example_conll2003
fxmarty
2022-07-12T07:20:37Z
0
0
null
[ "tensorboard", "distilbert", "token-classification", "dataset:conll2003", "region:us" ]
token-classification
2022-07-12T07:20:32Z
--- pipeline_tag: token-classification datasets: - conll2003 metrics: - precision - recall - f1 - accuracy tags: - distilbert --- **task**: `token-classification` **Backend:** `sagemaker-training` **Backend args:** `{'instance_type': 'ml.g4dn.2xlarge', 'supported_instructions': 'avx512_vnni'}` **Number of evaluation samples:** `1000` Fixed parameters: * **model_name_or_path**: `elastic/distilbert-base-uncased-finetuned-conll03-english` * **dataset**: * **path**: `conll2003` * **eval_split**: `validation` * **data_keys**: `{'primary': 'tokens'}` * **ref_keys**: `['ner_tags']` * **calibration_split**: `train` * **node_exclusion**: `[]` * **per_channel**: `False` * **calibration**: * **method**: `minmax` * **num_calibration_samples**: `100` * **framework**: `onnxruntime` * **framework_args**: * **opset**: `11` * **optimization_level**: `1` * **aware_training**: `False` Benchmarked parameters: * **quantization_approach**: `dynamic`, `static` * **operators_to_quantize**: `['Add', 'MatMul']`, `['Add']` # Evaluation ## Non-time metrics | quantization_approach | operators_to_quantize | | precision (original) | precision (optimized) | | recall (original) | recall (optimized) | | f1 (original) | f1 (optimized) | | accuracy (original) | accuracy (optimized) | | :-------------------: | :-------------------: | :-: | :------------------: | :-------------------: | :-: | :---------------: | :----------------: | :-: | :-----------: | :------------: | :-: | :-----------------: | :------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 0.937 | 0.937 | \| | 0.953 | 0.953 | \| | 0.945 | 0.945 | \| | 0.988 | 0.988 | | `dynamic` | `['Add']` | \| | 0.937 | 0.937 | \| | 0.953 | 0.953 | \| | 0.945 | 0.945 | \| | 0.988 | 0.988 | | `static` | `['Add', 'MatMul']` | \| | 0.937 | 0.074 | \| | 0.953 | 0.253 | \| | 0.945 | 0.114 | \| | 0.988 | 0.363 | | `static` | `['Add']` | \| | 0.937 | 0.065 | \| | 0.953 | 0.186 | \| | 0.945 | 0.096 | \| | 0.988 | 0.340 | ## Time metrics Time benchmarks were run for 3 seconds per config. Below, time metrics for batch size = 1, input length = 64. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 57.64 | 12.30 | \| | 17.67 | 81.33 | | `dynamic` | `['Add']` | \| | 43.51 | 29.42 | \| | 23.00 | 34.00 | | `static` | `['Add', 'MatMul']` | \| | 43.05 | 21.11 | \| | 23.33 | 47.67 | | `static` | `['Add']` | \| | 43.50 | 37.93 | \| | 23.00 | 26.67 | Below, time metrics for batch size = 4, input length = 64. | quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 119.50 | 39.92 | \| | 8.67 | 25.33 | | `dynamic` | `['Add']` | \| | 119.62 | 107.42 | \| | 8.67 | 9.33 | | `static` | `['Add', 'MatMul']` | \| | 120.23 | 56.94 | \| | 8.33 | 17.67 | | `static` | `['Add']` | \| | 119.10 | 130.78 | \| | 8.67 | 7.67 | Below, time metrics for batch size = 8, input length = 64. 
| quantization_approach | operators_to_quantize | | latency_mean (original, ms) | latency_mean (optimized, ms) | | throughput (original, /s) | throughput (optimized, /s) | | :-------------------: | :-------------------: | :-: | :-------------------------: | :--------------------------: | :-: | :-----------------------: | :------------------------: | | `dynamic` | `['Add', 'MatMul']` | \| | 165.84 | 75.45 | \| | 6.33 | 13.33 | | `dynamic` | `['Add']` | \| | 214.65 | 211.41 | \| | 4.67 | 5.00 | | `static` | `['Add', 'MatMul']` | \| | 166.53 | 129.00 | \| | 6.33 | 8.00 | | `static` | `['Add']` | \| | 214.81 | 256.95 | \| | 4.67 | 4.00 |
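For reference, dynamic weight quantization of an ONNX export like the one benchmarked above can be reproduced with ONNX Runtime's quantization utilities. The sketch below is a generic example rather than the exact pipeline that produced these numbers; the file paths are placeholders, and the argument names follow the onnxruntime releases that were current when this card was generated.

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# "model.onnx" is a placeholder for an already-exported ONNX graph of the
# token-classification model (e.g. produced with transformers.onnx or optimum).
quantize_dynamic(
    model_input="model.onnx",
    model_output="model-quantized.onnx",
    op_types_to_quantize=["MatMul", "Add"],  # mirrors the operators benchmarked above
    weight_type=QuantType.QInt8,
)
```

Static quantization additionally requires a calibration pass over representative inputs, which is why the static configurations above list a calibration method and a number of calibration samples.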
AntiSquid/TEST2ppo-LunarLander-v2
AntiSquid
2022-07-12T07:10:57Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-07-06T21:53:51Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 285.66 +/- 15.86 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch, assuming the checkpoint was pushed with huggingface_sb3; the filename below is a placeholder, so check the repository's file list for the exact name. ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="AntiSquid/TEST2ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint)  # the filename above is an assumption ```
sun1638650145/Reinforce-Pixelcopter-PLE-v0
sun1638650145
2022-07-12T07:06:35Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T07:06:22Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pixelcopter-PLE-v0 results: - metrics: - type: mean_reward value: 13.30 +/- 9.12 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 --- # Playing **Pixelcopter-PLE-v0** with a **Reinforce** agent This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, see Unit 5 of the Deep Reinforcement Learning Course: https://github.com/huggingface/deep-rl-class/tree/main/unit5
botika/Indobert-QA-finetuned-squad
botika
2022-07-12T06:55:08Z
20
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-06-27T08:19:17Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Indobert-QA-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Indobert-QA-finetuned-squad This model is a fine-tuned version of [Rifky/Indobert-QA](https://huggingface.co/Rifky/Indobert-QA) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 15.2477 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 1.1181 | 1.0 | 5510 | 4.8523 | | 0.9746 | 2.0 | 11020 | 5.4560 | | 0.8135 | 3.0 | 16530 | 5.7017 | | 0.6964 | 4.0 | 22040 | 6.2898 | | 0.6052 | 5.0 | 27550 | 6.0962 | | 0.512 | 6.0 | 33060 | 6.4996 | | 0.4303 | 7.0 | 38570 | 6.9570 | | 0.3532 | 8.0 | 44080 | 7.4206 | | 0.3199 | 9.0 | 49590 | 7.4004 | | 0.4247 | 10.0 | 55100 | 6.9846 | | 0.3641 | 11.0 | 60610 | 6.8940 | | 0.3277 | 12.0 | 66120 | 7.0796 | | 0.2899 | 13.0 | 71630 | 7.4511 | | 0.2794 | 14.0 | 77140 | 7.2660 | | 0.2496 | 15.0 | 82650 | 7.9774 | | 0.2299 | 16.0 | 88160 | 7.6985 | | 0.2082 | 17.0 | 93670 | 7.8321 | | 0.1975 | 18.0 | 99180 | 8.1735 | | 0.1784 | 19.0 | 104690 | 8.5620 | | 0.1675 | 20.0 | 110200 | 8.7616 | | 0.1613 | 21.0 | 115710 | 8.8350 | | 0.1484 | 22.0 | 121220 | 8.9582 | | 0.1482 | 23.0 | 126730 | 9.0406 | | 0.1381 | 24.0 | 132240 | 8.9652 | | 0.1411 | 25.0 | 137750 | 9.4613 | | 0.1236 | 26.0 | 143260 | 9.6738 | | 0.1216 | 27.0 | 148770 | 9.8708 | | 0.1192 | 28.0 | 154280 | 10.3220 | | 0.12 | 29.0 | 159790 | 10.0470 | | 0.1041 | 30.0 | 165300 | 10.6753 | | 0.1055 | 31.0 | 170810 | 10.2775 | | 0.1083 | 32.0 | 176320 | 10.4515 | | 0.0924 | 33.0 | 181830 | 10.2080 | | 0.0959 | 34.0 | 187340 | 10.8958 | | 0.0978 | 35.0 | 192850 | 10.8256 | | 0.0865 | 36.0 | 198360 | 11.6631 | | 0.0825 | 37.0 | 203870 | 11.9017 | | 0.0807 | 38.0 | 209380 | 11.4407 | | 0.0674 | 39.0 | 214890 | 11.5917 | | 0.0809 | 40.0 | 220400 | 11.4535 | | 0.0708 | 41.0 | 225910 | 12.1592 | | 0.0778 | 42.0 | 231420 | 12.0278 | | 0.0726 | 43.0 | 236930 | 11.7701 | | 0.0627 | 44.0 | 242440 | 12.2976 | | 0.0681 | 45.0 | 247950 | 12.7727 | | 0.0672 | 46.0 | 253460 | 12.8623 | | 0.0608 | 47.0 | 258970 | 12.9669 | | 0.067 | 48.0 | 264480 | 13.4741 | | 0.0625 | 49.0 | 269990 | 13.6245 | | 0.0585 | 50.0 | 275500 | 13.4891 | | 0.0568 | 51.0 | 281010 | 13.4374 | | 0.0583 | 52.0 | 286520 | 12.8947 | | 0.0467 | 53.0 | 292030 | 13.6060 | | 0.0416 | 54.0 | 297540 | 14.3267 | | 0.0504 | 55.0 | 303050 | 13.7715 | | 0.0431 | 56.0 | 308560 | 13.8461 | | 0.0412 | 57.0 | 314070 | 13.7060 | | 0.0383 | 58.0 | 319580 | 14.3548 | | 0.0327 | 59.0 | 325090 | 14.4535 | | 0.0448 | 60.0 | 330600 | 14.2505 | | 0.0409 | 61.0 | 336110 | 13.8177 | | 0.0332 | 62.0 | 341620 | 13.0098 | | 0.0345 | 63.0 | 347130 | 13.8678 | | 0.0267 | 64.0 | 352640 | 14.3916 | | 0.0302 | 65.0 | 358150 | 14.1668 | | 0.0292 | 66.0 | 363660 | 13.6313 | | 0.0302 | 67.0 | 369170 | 14.1120 | | 0.0265 | 68.0 | 374680 | 
15.0709 | | 0.0276 | 69.0 | 380190 | 14.6093 | | 0.0223 | 70.0 | 385700 | 15.0999 | | 0.0306 | 71.0 | 391210 | 15.1224 | | 0.0281 | 72.0 | 396720 | 15.5029 | | 0.019 | 73.0 | 402230 | 15.3474 | | 0.02 | 74.0 | 407740 | 14.7976 | | 0.018 | 75.0 | 413250 | 15.3104 | | 0.0184 | 76.0 | 418760 | 15.3137 | | 0.0171 | 77.0 | 424270 | 14.8188 | | 0.0164 | 78.0 | 429780 | 15.4378 | | 0.0165 | 79.0 | 435290 | 15.1186 | | 0.0168 | 80.0 | 440800 | 14.7998 | | 0.0115 | 81.0 | 446310 | 14.4591 | | 0.0138 | 82.0 | 451820 | 15.2517 | | 0.0117 | 83.0 | 457330 | 14.7899 | | 0.0118 | 84.0 | 462840 | 15.5304 | | 0.0119 | 85.0 | 468350 | 14.6794 | | 0.0134 | 86.0 | 473860 | 14.5271 | | 0.0076 | 87.0 | 479370 | 15.7098 | | 0.0076 | 88.0 | 484880 | 14.2286 | | 0.01 | 89.0 | 490390 | 15.2608 | | 0.0094 | 90.0 | 495900 | 14.9055 | | 0.0069 | 91.0 | 501410 | 14.8540 | | 0.0082 | 92.0 | 506920 | 15.2562 | | 0.0068 | 93.0 | 512430 | 14.9342 | | 0.0047 | 94.0 | 517940 | 15.3755 | | 0.0062 | 95.0 | 523450 | 15.2753 | | 0.0046 | 96.0 | 528960 | 15.0191 | | 0.0057 | 97.0 | 534470 | 14.9508 | | 0.0033 | 98.0 | 539980 | 15.4440 | | 0.0045 | 99.0 | 545490 | 15.4171 | | 0.0048 | 100.0 | 551000 | 15.2477 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
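As the card's usage section is empty, here is a minimal inference sketch with the 🤗 Transformers pipeline API; the Indonesian question and context are invented placeholders. Given the high validation loss reported above, predictions are worth double-checking.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="botika/Indobert-QA-finetuned-squad")

# Invented example input; replace with your own question/context pair.
result = qa(
    question="Siapa presiden pertama Indonesia?",
    context="Soekarno adalah presiden pertama Indonesia, menjabat dari tahun 1945 hingga 1967.",
)
print(result["answer"], result["score"])
```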
thunlp/Lawformer
thunlp
2022-07-12T06:23:13Z
155
19
transformers
[ "transformers", "pytorch", "longformer", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
## Lawformer ### Introduction This repository provides the source code and checkpoints of the paper "Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents". You can download the checkpoint from the [huggingface model hub](https://huggingface.co/xcjthu/Lawformer) or from [here](https://data.thunlp.org/legal/Lawformer.zip). ### Easy Start We have uploaded our model to the huggingface model hub. Make sure you have installed transformers. ```python >>> from transformers import AutoModel, AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained("thunlp/Lawformer") >>> model = AutoModel.from_pretrained("thunlp/Lawformer") >>> inputs = tokenizer("任某提起诉讼,请求判令解除婚姻关系并对夫妻共同财产进行分割。", return_tensors="pt") >>> outputs = model(**inputs) ``` ### Cite If you use the pre-trained models, please cite this paper: ``` @article{xiao2021lawformer, title={Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents}, author={Xiao, Chaojun and Hu, Xueyu and Liu, Zhiyuan and Tu, Cunchao and Sun, Maosong}, year={2021} } ```
MiguelCosta/finetuning-sentiment-model-3000-samples
MiguelCosta
2022-07-12T06:06:41Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-07-12T04:48:06Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8766666666666667 - name: F1 type: f1 value: 0.8810289389067525 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.5805 - Accuracy: 0.8767 - F1: 0.8810 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
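A minimal inference sketch for this checkpoint using the 🤗 Transformers pipeline API; the review snippets are invented, and the returned label names depend on the checkpoint's label mapping.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MiguelCosta/finetuning-sentiment-model-3000-samples",
)

# Invented example reviews; the model was fine-tuned on IMDB movie reviews.
print(classifier([
    "A genuinely moving film with great performances.",
    "Two hours of my life I will never get back.",
]))
```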
Evelyn18/legalectra-small-spanish-becasv3-6
Evelyn18
2022-07-12T05:05:14Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T04:49:13Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: legalectra-small-spanish-becasv3-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legalectra-small-spanish-becasv3-6 This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 3.8441 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 5.6469 | | No log | 2.0 | 10 | 5.5104 | | No log | 3.0 | 15 | 5.4071 | | No log | 4.0 | 20 | 5.3313 | | No log | 5.0 | 25 | 5.2629 | | No log | 6.0 | 30 | 5.1972 | | No log | 7.0 | 35 | 5.1336 | | No log | 8.0 | 40 | 5.0667 | | No log | 9.0 | 45 | 5.0030 | | No log | 10.0 | 50 | 4.9302 | | No log | 11.0 | 55 | 4.8646 | | No log | 12.0 | 60 | 4.7963 | | No log | 13.0 | 65 | 4.7328 | | No log | 14.0 | 70 | 4.6735 | | No log | 15.0 | 75 | 4.6258 | | No log | 16.0 | 80 | 4.5869 | | No log | 17.0 | 85 | 4.5528 | | No log | 18.0 | 90 | 4.5177 | | No log | 19.0 | 95 | 4.4916 | | No log | 20.0 | 100 | 4.4685 | | No log | 21.0 | 105 | 4.4371 | | No log | 22.0 | 110 | 4.4271 | | No log | 23.0 | 115 | 4.3905 | | No log | 24.0 | 120 | 4.3931 | | No log | 25.0 | 125 | 4.3902 | | No log | 26.0 | 130 | 4.3772 | | No log | 27.0 | 135 | 4.3981 | | No log | 28.0 | 140 | 4.4463 | | No log | 29.0 | 145 | 4.4501 | | No log | 30.0 | 150 | 4.4654 | | No log | 31.0 | 155 | 4.4069 | | No log | 32.0 | 160 | 4.4108 | | No log | 33.0 | 165 | 4.4394 | | No log | 34.0 | 170 | 4.4320 | | No log | 35.0 | 175 | 4.3541 | | No log | 36.0 | 180 | 4.4534 | | No log | 37.0 | 185 | 4.2616 | | No log | 38.0 | 190 | 4.2474 | | No log | 39.0 | 195 | 4.4358 | | No log | 40.0 | 200 | 4.3060 | | No log | 41.0 | 205 | 4.1866 | | No log | 42.0 | 210 | 4.2735 | | No log | 43.0 | 215 | 4.2739 | | No log | 44.0 | 220 | 4.1812 | | No log | 45.0 | 225 | 4.2484 | | No log | 46.0 | 230 | 4.3706 | | No log | 47.0 | 235 | 4.3487 | | No log | 48.0 | 240 | 4.2805 | | No log | 49.0 | 245 | 4.3180 | | No log | 50.0 | 250 | 4.3574 | | No log | 51.0 | 255 | 4.2823 | | No log | 52.0 | 260 | 4.0643 | | No log | 53.0 | 265 | 4.0729 | | No log | 54.0 | 270 | 4.2368 | | No log | 55.0 | 275 | 4.2845 | | No log | 56.0 | 280 | 4.1009 | | No log | 57.0 | 285 | 4.0629 | | No log | 58.0 | 290 | 4.1250 | | No log | 59.0 | 295 | 4.2048 | | No log | 60.0 | 300 | 4.2412 | | No log | 61.0 | 305 | 4.1653 | | No log | 62.0 | 310 | 4.1433 | | No log | 63.0 | 315 | 4.1309 | | No log | 64.0 | 320 | 4.1381 | | No log | 65.0 | 325 | 4.2162 | | No log | 66.0 | 330 | 4.1858 | | No log | 67.0 | 335 | 4.1342 | | No log | 68.0 | 340 | 4.1247 | | No log | 69.0 | 345 | 4.1701 | | No log | 70.0 | 350 | 4.1915 | | No log | 71.0 | 355 | 4.1356 | | No log | 72.0 | 360 | 4.1766 | | No log | 73.0 | 365 | 4.1296 | | No log | 74.0 | 370 | 
4.0594 | | No log | 75.0 | 375 | 4.0601 | | No log | 76.0 | 380 | 4.0328 | | No log | 77.0 | 385 | 3.9978 | | No log | 78.0 | 390 | 4.0070 | | No log | 79.0 | 395 | 4.0519 | | No log | 80.0 | 400 | 4.1000 | | No log | 81.0 | 405 | 3.9550 | | No log | 82.0 | 410 | 3.9159 | | No log | 83.0 | 415 | 3.9494 | | No log | 84.0 | 420 | 4.0546 | | No log | 85.0 | 425 | 4.2223 | | No log | 86.0 | 430 | 4.2665 | | No log | 87.0 | 435 | 3.8892 | | No log | 88.0 | 440 | 3.7763 | | No log | 89.0 | 445 | 3.8576 | | No log | 90.0 | 450 | 4.0089 | | No log | 91.0 | 455 | 4.1495 | | No log | 92.0 | 460 | 4.1545 | | No log | 93.0 | 465 | 4.0164 | | No log | 94.0 | 470 | 3.9175 | | No log | 95.0 | 475 | 3.9308 | | No log | 96.0 | 480 | 3.9658 | | No log | 97.0 | 485 | 3.9856 | | No log | 98.0 | 490 | 3.9691 | | No log | 99.0 | 495 | 3.9082 | | 3.2873 | 100.0 | 500 | 3.8736 | | 3.2873 | 101.0 | 505 | 3.8963 | | 3.2873 | 102.0 | 510 | 3.9391 | | 3.2873 | 103.0 | 515 | 3.9408 | | 3.2873 | 104.0 | 520 | 3.9075 | | 3.2873 | 105.0 | 525 | 3.8258 | | 3.2873 | 106.0 | 530 | 3.7917 | | 3.2873 | 107.0 | 535 | 3.7981 | | 3.2873 | 108.0 | 540 | 3.8272 | | 3.2873 | 109.0 | 545 | 3.8655 | | 3.2873 | 110.0 | 550 | 3.8234 | | 3.2873 | 111.0 | 555 | 3.7126 | | 3.2873 | 112.0 | 560 | 3.6981 | | 3.2873 | 113.0 | 565 | 3.7327 | | 3.2873 | 114.0 | 570 | 3.8470 | | 3.2873 | 115.0 | 575 | 4.0036 | | 3.2873 | 116.0 | 580 | 4.0412 | | 3.2873 | 117.0 | 585 | 4.0487 | | 3.2873 | 118.0 | 590 | 4.0524 | | 3.2873 | 119.0 | 595 | 4.0375 | | 3.2873 | 120.0 | 600 | 3.9971 | | 3.2873 | 121.0 | 605 | 3.8959 | | 3.2873 | 122.0 | 610 | 3.8834 | | 3.2873 | 123.0 | 615 | 3.9279 | | 3.2873 | 124.0 | 620 | 3.9374 | | 3.2873 | 125.0 | 625 | 3.9515 | | 3.2873 | 126.0 | 630 | 3.9625 | | 3.2873 | 127.0 | 635 | 3.9635 | | 3.2873 | 128.0 | 640 | 3.9596 | | 3.2873 | 129.0 | 645 | 3.8871 | | 3.2873 | 130.0 | 650 | 3.8307 | | 3.2873 | 131.0 | 655 | 3.8318 | | 3.2873 | 132.0 | 660 | 3.8403 | | 3.2873 | 133.0 | 665 | 3.8560 | | 3.2873 | 134.0 | 670 | 3.8650 | | 3.2873 | 135.0 | 675 | 3.8734 | | 3.2873 | 136.0 | 680 | 3.8756 | | 3.2873 | 137.0 | 685 | 3.8613 | | 3.2873 | 138.0 | 690 | 3.8447 | | 3.2873 | 139.0 | 695 | 3.8362 | | 3.2873 | 140.0 | 700 | 3.8328 | | 3.2873 | 141.0 | 705 | 3.8350 | | 3.2873 | 142.0 | 710 | 3.8377 | | 3.2873 | 143.0 | 715 | 3.8399 | | 3.2873 | 144.0 | 720 | 3.8414 | | 3.2873 | 145.0 | 725 | 3.8422 | | 3.2873 | 146.0 | 730 | 3.8435 | | 3.2873 | 147.0 | 735 | 3.8437 | | 3.2873 | 148.0 | 740 | 3.8437 | | 3.2873 | 149.0 | 745 | 3.8440 | | 3.2873 | 150.0 | 750 | 3.8441 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
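Since the usage sections above are placeholders, the sketch below shows one way this checkpoint could be queried with the 🤗 Transformers pipeline API; the Spanish question and context are invented, and given the small model and dataset the answers should be validated. The same pattern applies to the other legalectra-small-spanish-becasv3 variants listed below.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Evelyn18/legalectra-small-spanish-becasv3-6")

# Invented Spanish example; replace with a question/context pair from the becas domain.
result = qa(
    question="¿Cuáles son los requisitos para obtener la beca?",
    context="Para obtener la beca, el estudiante debe estar matriculado y mantener un promedio mínimo.",
)
print(result)
```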
Evelyn18/legalectra-small-spanish-becasv3-5
Evelyn18
2022-07-12T04:45:36Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T04:43:31Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: legalectra-small-spanish-becasv3-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legalectra-small-spanish-becasv3-5 This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 4.7020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 5.7715 | | No log | 2.0 | 10 | 5.7001 | | No log | 3.0 | 15 | 5.6206 | | No log | 4.0 | 20 | 5.5463 | | No log | 5.0 | 25 | 5.4866 | | No log | 6.0 | 30 | 5.4369 | | No log | 7.0 | 35 | 5.3939 | | No log | 8.0 | 40 | 5.3545 | | No log | 9.0 | 45 | 5.3168 | | No log | 10.0 | 50 | 5.2824 | | No log | 11.0 | 55 | 5.2504 | | No log | 12.0 | 60 | 5.2193 | | No log | 13.0 | 65 | 5.1864 | | No log | 14.0 | 70 | 5.1515 | | No log | 15.0 | 75 | 5.1174 | | No log | 16.0 | 80 | 5.0839 | | No log | 17.0 | 85 | 5.0497 | | No log | 18.0 | 90 | 5.0188 | | No log | 19.0 | 95 | 4.9937 | | No log | 20.0 | 100 | 4.9726 | | No log | 21.0 | 105 | 4.9483 | | No log | 22.0 | 110 | 4.9205 | | No log | 23.0 | 115 | 4.8993 | | No log | 24.0 | 120 | 4.8802 | | No log | 25.0 | 125 | 4.8612 | | No log | 26.0 | 130 | 4.8498 | | No log | 27.0 | 135 | 4.8294 | | No log | 28.0 | 140 | 4.8176 | | No log | 29.0 | 145 | 4.8144 | | No log | 30.0 | 150 | 4.8012 | | No log | 31.0 | 155 | 4.7890 | | No log | 32.0 | 160 | 4.7745 | | No log | 33.0 | 165 | 4.7641 | | No log | 34.0 | 170 | 4.7558 | | No log | 35.0 | 175 | 4.7474 | | No log | 36.0 | 180 | 4.7384 | | No log | 37.0 | 185 | 4.7319 | | No log | 38.0 | 190 | 4.7262 | | No log | 39.0 | 195 | 4.7225 | | No log | 40.0 | 200 | 4.7201 | | No log | 41.0 | 205 | 4.7165 | | No log | 42.0 | 210 | 4.7129 | | No log | 43.0 | 215 | 4.7111 | | No log | 44.0 | 220 | 4.7086 | | No log | 45.0 | 225 | 4.7060 | | No log | 46.0 | 230 | 4.7049 | | No log | 47.0 | 235 | 4.7036 | | No log | 48.0 | 240 | 4.7028 | | No log | 49.0 | 245 | 4.7023 | | No log | 50.0 | 250 | 4.7020 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Evelyn18/legalectra-small-spanish-becasv3-4
Evelyn18
2022-07-12T04:38:19Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T04:36:14Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: legalectra-small-spanish-becasv3-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legalectra-small-spanish-becasv3-4 This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 4.1290 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 5.6625 | | No log | 2.0 | 10 | 5.4940 | | No log | 3.0 | 15 | 5.3886 | | No log | 4.0 | 20 | 5.3004 | | No log | 5.0 | 25 | 5.2210 | | No log | 6.0 | 30 | 5.1434 | | No log | 7.0 | 35 | 5.0546 | | No log | 8.0 | 40 | 4.9726 | | No log | 9.0 | 45 | 4.9227 | | No log | 10.0 | 50 | 4.8344 | | No log | 11.0 | 55 | 4.7749 | | No log | 12.0 | 60 | 4.7381 | | No log | 13.0 | 65 | 4.7016 | | No log | 14.0 | 70 | 4.6581 | | No log | 15.0 | 75 | 4.6231 | | No log | 16.0 | 80 | 4.5900 | | No log | 17.0 | 85 | 4.5446 | | No log | 18.0 | 90 | 4.5041 | | No log | 19.0 | 95 | 4.4635 | | No log | 20.0 | 100 | 4.4356 | | No log | 21.0 | 105 | 4.3985 | | No log | 22.0 | 110 | 4.3650 | | No log | 23.0 | 115 | 4.3540 | | No log | 24.0 | 120 | 4.3270 | | No log | 25.0 | 125 | 4.2873 | | No log | 26.0 | 130 | 4.2808 | | No log | 27.0 | 135 | 4.2623 | | No log | 28.0 | 140 | 4.2466 | | No log | 29.0 | 145 | 4.2488 | | No log | 30.0 | 150 | 4.2410 | | No log | 31.0 | 155 | 4.2187 | | No log | 32.0 | 160 | 4.2000 | | No log | 33.0 | 165 | 4.1883 | | No log | 34.0 | 170 | 4.1803 | | No log | 35.0 | 175 | 4.1773 | | No log | 36.0 | 180 | 4.1652 | | No log | 37.0 | 185 | 4.1614 | | No log | 38.0 | 190 | 4.1609 | | No log | 39.0 | 195 | 4.1652 | | No log | 40.0 | 200 | 4.1560 | | No log | 41.0 | 205 | 4.1435 | | No log | 42.0 | 210 | 4.1463 | | No log | 43.0 | 215 | 4.1434 | | No log | 44.0 | 220 | 4.1340 | | No log | 45.0 | 225 | 4.1259 | | No log | 46.0 | 230 | 4.1212 | | No log | 47.0 | 235 | 4.1224 | | No log | 48.0 | 240 | 4.1257 | | No log | 49.0 | 245 | 4.1284 | | No log | 50.0 | 250 | 4.1290 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Saraswati/q-FrozenLake-v1-4x4-noSlippery
Saraswati
2022-07-12T04:25:49Z
0
1
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-07-12T04:25:40Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Saraswati/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Evelyn18/legalectra-small-spanish-becasv3-1
Evelyn18
2022-07-12T03:54:49Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:becasv2", "endpoints_compatible", "region:us" ]
question-answering
2022-07-12T03:49:49Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: legalectra-small-spanish-becasv3-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # legalectra-small-spanish-becasv3-1 This model is a fine-tuned version of [mrm8488/legalectra-small-spanish](https://huggingface.co/mrm8488/legalectra-small-spanish) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 5.5694 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 8 | 5.8980 | | No log | 2.0 | 16 | 5.8136 | | No log | 3.0 | 24 | 5.7452 | | No log | 4.0 | 32 | 5.6940 | | No log | 5.0 | 40 | 5.6554 | | No log | 6.0 | 48 | 5.6241 | | No log | 7.0 | 56 | 5.5997 | | No log | 8.0 | 64 | 5.5830 | | No log | 9.0 | 72 | 5.5730 | | No log | 10.0 | 80 | 5.5694 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
paola-md/recipe-distilbert-upper-Is
paola-md
2022-07-12T03:03:14Z
13
0
transformers
[ "transformers", "pytorch", "distilbert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-07-12T00:16:41Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: recipe-distilbert-upper-Is results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # recipe-distilbert-upper-Is This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8565 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.6309 | 1.0 | 1305 | 1.2607 | | 1.2639 | 2.0 | 2610 | 1.1291 | | 1.1592 | 3.0 | 3915 | 1.0605 | | 1.0987 | 4.0 | 5220 | 1.0128 | | 1.0569 | 5.0 | 6525 | 0.9796 | | 1.0262 | 6.0 | 7830 | 0.9592 | | 1.0032 | 7.0 | 9135 | 0.9352 | | 0.9815 | 8.0 | 10440 | 0.9186 | | 0.967 | 9.0 | 11745 | 0.9086 | | 0.9532 | 10.0 | 13050 | 0.8973 | | 0.9436 | 11.0 | 14355 | 0.8888 | | 0.9318 | 12.0 | 15660 | 0.8835 | | 0.9243 | 13.0 | 16965 | 0.8748 | | 0.9169 | 14.0 | 18270 | 0.8673 | | 0.9117 | 15.0 | 19575 | 0.8610 | | 0.9066 | 16.0 | 20880 | 0.8562 | | 0.9028 | 17.0 | 22185 | 0.8566 | | 0.901 | 18.0 | 23490 | 0.8583 | | 0.8988 | 19.0 | 24795 | 0.8557 | | 0.8958 | 20.0 | 26100 | 0.8565 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
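A minimal masked-language-modelling sketch for this checkpoint; the recipe-style prompt is invented, and the mask token is assumed to be DistilBERT's [MASK] since the model was fine-tuned from distilbert-base-uncased.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="paola-md/recipe-distilbert-upper-Is")

# Invented recipe-style prompt; the pipeline returns the top candidates for [MASK].
for prediction in fill_mask("Chop the onions and [MASK] them in olive oil."):
    print(prediction["token_str"], round(prediction["score"], 3))
```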
huggingtweets/hhelafifi
huggingtweets
2022-07-12T02:49:51Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-07-12T02:32:46Z
--- language: en thumbnail: http://www.huggingtweets.com/hhelafifi/1657594186366/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1147337070920097793/06CZyryx_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Hussein</div> <div style="text-align: center; font-size: 14px;">@hhelafifi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Hussein. | Data | Hussein | | --- | --- | | Tweets downloaded | 820 | | Retweets | 191 | | Short tweets | 95 | | Tweets kept | 534 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1j7uxays/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hhelafifi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/20d5foa3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/20d5foa3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hhelafifi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
nateraw/yolov6s
nateraw
2022-07-12T02:01:18Z
0
0
pytorch
[ "pytorch", "object-detection", "yolo", "autogenerated-modelcard", "en", "arxiv:1910.09700", "license:gpl-3.0", "region:us" ]
object-detection
2022-07-08T04:01:40Z
--- language: en license: gpl-3.0 library_name: pytorch tags: - object-detection - yolo - autogenerated-modelcard model_name: yolov6s --- # Model Card for yolov6s <!-- Provide a quick summary of what the model is/does. --> # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Model Examination](#model-examination) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#technical-specifications-optional) 9. [Citation](#citation) 10. [Glossary](#glossary-optional) 11. [More Information](#more-information-optional) 12. [Model Card Authors](#model-card-authors-optional) 13. [Model Card Contact](#model-card-contact) 14. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance. - **Developed by:** [More Information Needed] - **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw) - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Related Models:** [yolov6t](https://hf.co/nateraw/yolov6t), [yolov6n](https://hf.co/nateraw/yolov6n) - **Parent Model:** N/A - **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6) # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is meant to be used as a general object detector. ## Downstream Use [Optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> You can fine-tune this model for your specific task ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Don't be evil. # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well. ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recomendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] # Model Examination [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] Please refer to the [official GitHub Repository](https://github.com/meituan/YOLOv6) # Model Card Authors [optional] [@nateraw](https://hf.co/nateraw) # Model Card Contact [@nateraw](https://hf.co/nateraw) - please leave a note in the discussions tab here # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
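Because the quick-start section above is still a stub, the sketch below only shows how the checkpoint might be fetched with huggingface_hub; the weight filename is an assumption (check the repository's file list), and running inference additionally requires the code from the official YOLOv6 repository.

```python
from huggingface_hub import hf_hub_download

# The filename is an assumption; verify it against the files in the repo.
weights_path = hf_hub_download(repo_id="nateraw/yolov6s", filename="yolov6s.pt")
print(weights_path)  # local path to the checkpoint, to be loaded with the YOLOv6 tools
```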
nateraw/yolov6t
nateraw
2022-07-12T02:01:04Z
0
0
pytorch
[ "pytorch", "object-detection", "yolo", "autogenerated-modelcard", "en", "arxiv:1910.09700", "license:gpl-3.0", "region:us" ]
object-detection
2022-07-08T04:19:38Z
--- language: en license: gpl-3.0 library_name: pytorch tags: - object-detection - yolo - autogenerated-modelcard model_name: yolov6t --- # Model Card for yolov6t <!-- Provide a quick summary of what the model is/does. --> # Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) 4. [Training Details](#training-details) 5. [Evaluation](#evaluation) 6. [Model Examination](#model-examination) 7. [Environmental Impact](#environmental-impact) 8. [Technical Specifications](#technical-specifications-optional) 9. [Citation](#citation) 10. [Glossary](#glossary-optional) 11. [More Information](#more-information-optional) 12. [Model Card Authors](#model-card-authors-optional) 13. [Model Card Contact](#model-card-contact) 14. [How To Get Started With the Model](#how-to-get-started-with-the-model) # Model Details ## Model Description <!-- Provide a longer summary of what this model is. --> YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance. - **Developed by:** [More Information Needed] - **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw) - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Related Models:** [yolov6s](https://hf.co/nateraw/yolov6s), [yolov6n](https://hf.co/nateraw/yolov6n) - **Parent Model:** N/A - **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6) # Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ## Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is meant to be used as a general object detector. ## Downstream Use [Optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> You can fine-tune this model for your specific task ## Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> Don't be evil. # Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> This model often classifies objects incorrectly, especially when applied to videos. It does not handle crowds very well. ## Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recomendations. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> ### Preprocessing [More Information Needed] ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] # Evaluation <!-- This section describes the evaluation protocols and provides the results. 
--> ## Testing Data, Factors & Metrics ### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] ### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] ### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ## Results [More Information Needed] # Model Examination [More Information Needed] # Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] # Technical Specifications [optional] ## Model Architecture and Objective [More Information Needed] ## Compute Infrastructure [More Information Needed] ### Hardware [More Information Needed] ### Software [More Information Needed] # Citation <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] # Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] # More Information [optional] Please refer to the [official GitHub Repository](https://github.com/meituan/YOLOv6) # Model Card Authors [optional] [@nateraw](https://hf.co/nateraw) # Model Card Contact [@nateraw](https://hf.co/nateraw) - please leave a note in the discussions tab here # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> [More Information Needed] </details>
ManqingLiu/pegasus-samsum
ManqingLiu
2022-07-11T22:33:51Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-11T21:16:06Z
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7236 | 0.54 | 500 | 1.4858 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 2.0.0 - Tokenizers 0.10.3
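A minimal dialogue-summarization sketch for this checkpoint with the 🤗 Transformers pipeline API; the SAMSum-style conversation below is invented.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ManqingLiu/pegasus-samsum")

# Invented SAMSum-style dialogue; replace with your own conversation.
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there!"
)
print(summarizer(dialogue)[0]["summary_text"])
```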
mariastull/testpyramidsrnd
mariastull
2022-07-11T22:28:45Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-07-11T22:28:40Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Step 1: Write your model_id: mariastull/testpyramidsrnd 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
camilag/t5-end2end-questions-generation
camilag
2022-07-11T20:52:28Z
4
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "dataset:squad_modified_for_t5_qg", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-07-11T20:12:30Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_modified_for_t5_qg model-index: - name: t5-end2end-questions-generation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-end2end-questions-generation This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset. It achieves the following results on the evaluation set: - Loss: 1.7927 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.5425 | 0.34 | 100 | 1.9416 | | 2.0221 | 0.68 | 200 | 1.7927 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
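A minimal generation sketch for this checkpoint; the "generate questions:" prefix is an assumption about the training format used by squad_modified_for_t5_qg, so check that dataset card for the exact input template, and the context sentence is invented.

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="camilag/t5-end2end-questions-generation")

# The task prefix below is an assumption; adjust it to match the training format.
context = "The Amazon rainforest covers much of the Amazon basin in South America."
print(qg("generate questions: " + context, max_length=64))
```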
jonatasgrosman/exp_w2v2t_pt_vp-it_s996
jonatasgrosman
2022-07-11T19:59:08Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:58:21Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-it_s996 Fine-tuned [facebook/wav2vec2-large-it-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-it-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
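This card and the sibling Portuguese checkpoints below all point to the HuggingSound tool, so a minimal transcription sketch with that library is shown once here; the audio file paths are placeholders, and the recordings must be sampled at 16 kHz as noted above.

```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pt_vp-it_s996")

# Placeholder paths; any 16 kHz Portuguese speech recordings will do.
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```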
jonatasgrosman/exp_w2v2t_pt_r-wav2vec2_s468
jonatasgrosman
2022-07-11T19:48:19Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:47:54Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_r-wav2vec2_s468 Fine-tuned [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_xls-r_s657
jonatasgrosman
2022-07-11T19:45:15Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:44:32Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_xls-r_s657 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_xls-r_s689
jonatasgrosman
2022-07-11T19:41:36Z
4
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:40:50Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_xls-r_s689 Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s103
jonatasgrosman
2022-07-11T19:34:07Z
3
0
transformers
[ "transformers", "pytorch", "unispeech-sat", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:33:36Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_unispeech-sat_s103 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_unispeech-sat_s756
jonatasgrosman
2022-07-11T19:26:48Z
3
0
transformers
[ "transformers", "pytorch", "unispeech-sat", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:26:24Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_unispeech-sat_s756 Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-nl_s6
jonatasgrosman
2022-07-11T19:17:20Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:16:53Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-nl_s6 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-nl_s833
jonatasgrosman
2022-07-11T19:13:31Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:12:53Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-nl_s833 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-es_s506
jonatasgrosman
2022-07-11T19:05:37Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T19:04:54Z
--- language: - pt license: apache-2.0 tags: - automatic-speech-recognition - pt datasets: - mozilla-foundation/common_voice_7_0 --- # exp_w2v2t_pt_vp-es_s506 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-fr_s675
jonatasgrosman
2022-07-11T18:49:06Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:48:25Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_vp-fr_s675

Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s808
jonatasgrosman
2022-07-11T18:31:15Z
4
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:30:46Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_unispeech-ml_s808

Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_unispeech-ml_s324
jonatasgrosman
2022-07-11T18:27:29Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:26:59Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_unispeech-ml_s324

Fine-tuned [microsoft/unispeech-large-multi-lingual-1500h-cv](https://huggingface.co/microsoft/unispeech-large-multi-lingual-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_wavlm_s118
jonatasgrosman
2022-07-11T18:23:23Z
3
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:22:59Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_wavlm_s118

Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
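To make the 16kHz requirement stated in these cards concrete, here is a hedged low-level inference sketch using the `transformers` Auto classes; the file name and the assumption that `AutoProcessor`/`AutoModelForCTC` resolve cleanly for this checkpoint are mine, not the card author's:

```python
# Sketch: CTC inference with explicit resampling to the 16kHz rate the card requires.
import torch
import torchaudio
from transformers import AutoModelForCTC, AutoProcessor

model_id = "jonatasgrosman/exp_w2v2t_pt_wavlm_s118"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder file
waveform = waveform.mean(dim=0)                        # downmix to mono
if sample_rate != 16_000:
    waveform = torchaudio.transforms.Resample(sample_rate, 16_000)(waveform)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```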
jonatasgrosman/exp_w2v2t_pt_wavlm_s691
jonatasgrosman
2022-07-11T18:13:28Z
3
0
transformers
[ "transformers", "pytorch", "wavlm", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:13:02Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_wavlm_s691

Fine-tuned [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_no-pretraining_s541
jonatasgrosman
2022-07-11T18:01:26Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T18:01:02Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_no-pretraining_s541

Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_no-pretraining_s84
jonatasgrosman
2022-07-11T17:57:59Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:57:34Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_no-pretraining_s84

Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-sv_s894
jonatasgrosman
2022-07-11T17:54:51Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:54:09Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_vp-sv_s894

Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_vp-sv_s563
jonatasgrosman
2022-07-11T17:51:15Z
5
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:50:36Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_vp-sv_s563

Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
AdiKompella/Reinforce-Pixelcopter-PLE-v0
AdiKompella
2022-07-11T17:48:01Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T17:47:44Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - metrics:
    - type: mean_reward
      value: 12.70 +/- 11.50
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
jonatasgrosman/exp_w2v2t_pt_vp-sv_s612
jonatasgrosman
2022-07-11T17:47:36Z
3
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:47:09Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_vp-sv_s612

Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
jonatasgrosman/exp_w2v2t_pt_hubert_s486
jonatasgrosman
2022-07-11T17:43:15Z
3
0
transformers
[ "transformers", "pytorch", "hubert", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:42:50Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_hubert_s486

Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ianspektor/reinforce-CartPole-v1
ianspektor
2022-07-11T17:36:19Z
0
0
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-07-11T16:33:35Z
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-CartPole-v1
  results:
  - metrics:
    - type: mean_reward
      value: 359.42 +/- 89.49
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: CartPole-v1
      type: CartPole-v1
---

# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
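The card itself does not include training code; the following is a generic REINFORCE (policy-gradient) sketch for CartPole-v1 under assumptions of mine — PyTorch, the classic `gym` API (pre-0.26), and an arbitrary small policy network — not the author's custom implementation from the course:

```python
# Sketch: minimal REINFORCE on CartPole-v1 (classic gym API assumed).
import gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

env = gym.make("CartPole-v1")
policy = nn.Sequential(
    nn.Linear(env.observation_space.shape[0], 16),
    nn.ReLU(),
    nn.Linear(16, env.action_space.n),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    state, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        dist = Categorical(logits=policy(torch.as_tensor(state, dtype=torch.float32)))
        action = dist.sample()
        state, reward, done, _ = env.step(action.item())
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # Discounted returns, computed backwards, then normalized for stability.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # Policy-gradient loss: maximize expected return = minimize negative log-prob * return.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```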
jonatasgrosman/exp_w2v2t_pt_unispeech_s474
jonatasgrosman
2022-07-11T17:29:59Z
3
0
transformers
[ "transformers", "pytorch", "unispeech", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_7_0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-07-11T17:29:33Z
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- pt
datasets:
- mozilla-foundation/common_voice_7_0
---

# exp_w2v2t_pt_unispeech_s474

Fine-tuned [microsoft/unispeech-large-1500h-cv](https://huggingface.co/microsoft/unispeech-large-1500h-cv) for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.

This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner
kinanmartin
2022-07-11T17:29:06Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:toydata", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-07-11T03:49:46Z
---
tags:
- generated_from_trainer
datasets:
- toydata
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-ner-hrl-finetuned-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: toydata
      type: toydata
      args: SDN
    metrics:
    - name: Precision
      type: precision
      value: 0.9132452695465905
    - name: Recall
      type: recall
      value: 0.9205854126679462
    - name: F1
      type: f1
      value: 0.9169006511739053
    - name: Accuracy
      type: accuracy
      value: 0.9784804945824268
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-large-ner-hrl-finetuned-ner

This model is a fine-tuned version of [Davlan/xlm-roberta-large-ner-hrl](https://huggingface.co/Davlan/xlm-roberta-large-ner-hrl) on the toydata dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0944
- Precision: 0.9132
- Recall: 0.9206
- F1: 0.9169
- Accuracy: 0.9785

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 408  | 0.0900          | 0.8508    | 0.9303 | 0.8888 | 0.9719   |
| 0.1087        | 2.0   | 816  | 0.0827          | 0.9043    | 0.9230 | 0.9136 | 0.9783   |
| 0.0503        | 3.0   | 1224 | 0.0944          | 0.9132    | 0.9206 | 0.9169 | 0.9785   |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
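As a hedged usage sketch (not part of the auto-generated card), the fine-tuned checkpoint could be queried through the standard `transformers` token-classification pipeline; the example sentence is an assumption on my part, and the label set depends on the toydata annotations:

```python
# Sketch: NER inference via the token-classification pipeline (transformers >= 4.20, per the card).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="kinanmartin/xlm-roberta-large-ner-hrl-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Placeholder sentence mentioning New York and the United Nations."))
```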