| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-11 18:29:29) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 555 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-11 18:25:24) | card (string, 11–1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| public-data/DeepDanbooru | public-data | 2022-01-23T22:31:55Z | 0 | 1 | null | ["region:us"] | null | 2023-05-22T00:53:07Z |
# DeepDanbooru
- https://github.com/KichangKim/DeepDanbooru
- https://github.com/KichangKim/DeepDanbooru/releases/tag/v3-20200915-sgd-e30
- https://github.com/KichangKim/DeepDanbooru/releases/download/v3-20200915-sgd-e30/deepdanbooru-v3-20200915-sgd-e30.zip
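As a quick way to fetch the pretrained weights linked above, the release archive can be downloaded and unpacked with the Python standard library (a minimal sketch; the output directory name is an arbitrary choice):
```python
import urllib.request
import zipfile

# URL of the v3-20200915-sgd-e30 release archive listed above
URL = ("https://github.com/KichangKim/DeepDanbooru/releases/download/"
       "v3-20200915-sgd-e30/deepdanbooru-v3-20200915-sgd-e30.zip")

urllib.request.urlretrieve(URL, "deepdanbooru-v3.zip")        # download the archive
with zipfile.ZipFile("deepdanbooru-v3.zip") as zf:
    zf.extractall("deepdanbooru-v3-20200915-sgd-e30")         # unpack the model and tag list
```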
| huggingtweets/twmatthieuh | huggingtweets | 2022-01-23T21:14:21Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/twmatthieuh/1642972456953/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1484525847176691715/BwsIu8hd_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Matthieu H.</div>
<div style="text-align: center; font-size: 14px;">@twmatthieuh</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Matthieu H.
| Data | Matthieu H. |
| --- | --- |
| Tweets downloaded | 1225 |
| Retweets | 507 |
| Short tweets | 26 |
| Tweets kept | 692 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2hx6jinu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @twmatthieuh's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/nrhuqdse) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/nrhuqdse/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/twmatthieuh')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| mattchurgin/xls-r-eng | mattchurgin | 2022-01-23T17:31:10Z | 6 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- ab
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [patrickvonplaten/wav2vec2_tiny_random_robust](https://huggingface.co/patrickvonplaten/wav2vec2_tiny_random_robust) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
| shivam/wav2vec2-xls-r-300m-hindi | shivam | 2022-01-23T16:37:08Z | 4 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4031
- Wer: 0.6827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.3156 | 3.4 | 500 | 4.5583 | 1.0 |
| 3.3329 | 6.8 | 1000 | 3.4274 | 1.0001 |
| 2.1275 | 10.2 | 1500 | 1.7221 | 0.8763 |
| 1.5737 | 13.6 | 2000 | 1.4188 | 0.8143 |
| 1.3835 | 17.01 | 2500 | 1.2251 | 0.7447 |
| 1.3247 | 20.41 | 3000 | 1.2827 | 0.7394 |
| 1.231 | 23.81 | 3500 | 1.2216 | 0.7074 |
| 1.1819 | 27.21 | 4000 | 1.2210 | 0.6863 |
| 1.1546 | 30.61 | 4500 | 1.3233 | 0.7308 |
| 1.0902 | 34.01 | 5000 | 1.3251 | 0.7010 |
| 1.0749 | 37.41 | 5500 | 1.3274 | 0.7235 |
| 1.0412 | 40.81 | 6000 | 1.2942 | 0.6856 |
| 1.0064 | 44.22 | 6500 | 1.2581 | 0.6732 |
| 1.0006 | 47.62 | 7000 | 1.2767 | 0.6885 |
| 0.9518 | 51.02 | 7500 | 1.2966 | 0.6925 |
| 0.9514 | 54.42 | 8000 | 1.2981 | 0.7067 |
| 0.9241 | 57.82 | 8500 | 1.3835 | 0.7124 |
| 0.9059 | 61.22 | 9000 | 1.3318 | 0.7083 |
| 0.8906 | 64.62 | 9500 | 1.3640 | 0.6962 |
| 0.8468 | 68.03 | 10000 | 1.4727 | 0.6982 |
| 0.8631 | 71.43 | 10500 | 1.3401 | 0.6809 |
| 0.8154 | 74.83 | 11000 | 1.4124 | 0.6955 |
| 0.7953 | 78.23 | 11500 | 1.4245 | 0.6950 |
| 0.818 | 81.63 | 12000 | 1.3944 | 0.6995 |
| 0.7772 | 85.03 | 12500 | 1.3735 | 0.6785 |
| 0.7857 | 88.43 | 13000 | 1.3696 | 0.6808 |
| 0.7705 | 91.84 | 13500 | 1.4101 | 0.6870 |
| 0.7537 | 95.24 | 14000 | 1.4178 | 0.6832 |
| 0.7734 | 98.64 | 14500 | 1.4027 | 0.6831 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0
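The card above does not include an inference snippet; a minimal sketch using the `transformers` automatic-speech-recognition pipeline is shown below (the audio path is a placeholder, and plain CTC decoding without an external language model is assumed):
```python
from transformers import pipeline

# assumption: greedy CTC decoding, no external language model
asr = pipeline("automatic-speech-recognition", model="shivam/wav2vec2-xls-r-300m-hindi")
print(asr("sample_hindi.wav"))  # placeholder path to a 16 kHz mono recording
```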
| ylh1013/fintune-ja-chatbot | ylh1013 | 2022-01-23T14:21:02Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language:
- finetuned_from
license: mit
tags:
- generated_from_trainer
model-index:
- name: fintune-ja-chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fintune-ja-chatbot
This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Tokenizers 0.10.3
| Madhour/gpt2-eli5 | Madhour | 2022-01-23T12:00:23Z | 10 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "ELI5", "en", "dataset:eli5", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:04Z |
---
language: en
tags:
- ELI5
license: gpl-3.0
datasets:
- eli5
Task: Summarization
widget:
- text: "<|BOS|><|SEP|>Consulting,business,Fraud<|SEP|>"
inference:
parameters:
temperature: 0.9
return_full_text: False
repetition_penalty: 1
---
# Conditional ELI5 Generator
Given a few keywords, it generates an ELI5 question with a corresponding answer.
The model is mainly used for [SeemsPhishy](https://github.com/madhour/seemsphishy) to auto-generate newsletters for phishing/penetration testing.
# How to use
```Python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
from torch import tensor
tokenizer = AutoTokenizer.from_pretrained("Madhour/gpt2-eli5")
model = AutoModelForCausalLM.from_pretrained("Madhour/gpt2-eli5")
prompt = "<|BOS|>" + "I have a question." + "<|SEP|>" + "keyword1,keyword2,keyword3" + "<|SEP|>"
prompt = tensor(tokenizer.encode(prompt)).unsqueeze(0)
text = model.generate(prompt,
do_sample=True,
min_length=50,
max_length=768,
top_k=30,
top_p=0.7,
temperature=0.9,
repetition_penalty=2.0,
num_return_sequences=3)
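# decode the generated token IDs back into text
answers = [tokenizer.decode(ids, skip_special_tokens=True) for ids in text]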
```
| asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala | asanka25 | 2022-01-23T10:59:51Z | 30 | 1 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
This model was created from the xlm-roberta-base model, first fine-tuned on the CoNLL 2003 dataset. On top of that trained model, we fine-tuned it again on Sinhala NER data that was also formatted to the CoNLL format.
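As a minimal usage sketch (the example sentence and pipeline options below are illustrative and not part of the original card), the checkpoint can be loaded with the standard `transformers` token-classification pipeline:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="asanka25/xlm-roberta-base-finetuned-conll03-english-finetuned-sinhala",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# an English sentence works as a smoke test since the model was first tuned on CoNLL 2003
print(ner("Hugging Face is based in New York City."))
```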
| dandelin/vilt-b32-finetuned-flickr30k | dandelin | 2022-01-23T09:46:32Z | 34 | 3 | transformers | ["transformers", "pytorch", "vilt", "arxiv:1505.04870", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# Vision-and-Language Transformer (ViLT), fine-tuned on Flickr30k
Vision-and-Language Transformer (ViLT) model fine-tuned on [Flickr30k](https://arxiv.org/abs/1505.04870#:~:text=The%20Flickr30k%20dataset%20has%20become,for%20sentence%2Dbased%20image%20description.&text=Such%20annotations%20are%20essential%20for,entity%20mentions%20in%20an%20image.). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the model for image and text retrieval.
### How to use
Here is how to use the model in PyTorch:
```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-flickr30k")

# forward pass: score each candidate text against the image
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
| dandelin/vilt-b32-finetuned-coco | dandelin | 2022-01-23T09:45:24Z | 10,342 | 1 | transformers | ["transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# Vision-and-Language Transformer (ViLT), fine-tuned on COCO
Vision-and-Language Transformer (ViLT) model fine-tuned on [COCO](https://cocodataset.org/#home). It was introduced in the paper [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the model for image and text retrieval.
### How to use
Here is how to use the model in PyTorch:
```python
from transformers import ViltProcessor, ViltForImageAndTextRetrieval
import requests
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")

# forward pass: score each candidate text against the image
scores = dict()
for text in texts:
    # prepare inputs
    encoding = processor(image, text, return_tensors="pt")
    outputs = model(**encoding)
    scores[text] = outputs.logits[0, :].item()
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
| dandelin/vilt-b32-finetuned-nlvr2 | dandelin | 2022-01-23T09:43:30Z | 673 | 2 | transformers | ["transformers", "pytorch", "vilt", "arxiv:2102.03334", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
# Vision-and-Language Transformer (ViLT), fine-tuned on NLVR2
Vision-and-Language Transformer (ViLT) model fine-tuned on [NLVR2](https://lil.nlp.cornell.edu/nlvr/). It was introduced in the paper [ViLT: Vision-and-Language Transformer
Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334) by Kim et al. and first released in [this repository](https://github.com/dandelin/ViLT).
Disclaimer: The team releasing ViLT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Intended uses & limitations
You can use the model to determine whether a sentence is true or false given 2 images.
### How to use
Here is how to use the model in PyTorch:
```python
from transformers import ViltProcessor, ViltForImagesAndTextClassification
import requests
from PIL import Image
image1 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_0.jpg", stream=True).raw)
image2 = Image.open(requests.get("https://lil.nlp.cornell.edu/nlvr/exs/ex0_1.jpg", stream=True).raw)
text = "The left image contains twice the number of dogs as the right image."
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
model = ViltForImagesAndTextClassification.from_pretrained("dandelin/vilt-b32-finetuned-nlvr2")
# prepare inputs
encoding = processor([image1, image2], text, return_tensors="pt")
# forward pass
outputs = model(input_ids=encoding.input_ids, pixel_values=encoding.pixel_values.unsqueeze(0))
logits = outputs.logits
idx = logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[idx])
```
## Training data
(to do)
## Training procedure
### Preprocessing
(to do)
### Pretraining
(to do)
## Evaluation results
(to do)
### BibTeX entry and citation info
```bibtex
@misc{kim2021vilt,
title={ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision},
author={Wonjae Kim and Bokyung Son and Ildoo Kim},
year={2021},
eprint={2102.03334},
archivePrefix={arXiv},
primaryClass={stat.ML}
}
```
| ylh1013/ja_chatbot | ylh1013 | 2022-01-23T02:24:03Z | 4 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language:
- finetuned_from
license: mit
tags:
- generated_from_trainer
model-index:
- name: ja_chatbot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ja_chatbot
This model is a fine-tuned version of [rinna/japanese-gpt2-medium](https://huggingface.co/rinna/japanese-gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Tokenizers 0.10.3
| pere/xls-test | pere | 2022-01-22T18:40:50Z | 5 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| alistvt/bert-base-uncased-pretrain-finetuned-coqa-falttened | alistvt | 2022-01-22T05:06:00Z | 30 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrain-finetuned-coqa-falttened
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrain-finetuned-coqa-falttened
This model is a fine-tuned version of [alistvt/bert-base-uncased-pretrained-mlm-coqa-stories](https://huggingface.co/alistvt/bert-base-uncased-pretrained-mlm-coqa-stories) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.2886 | 0.29 | 2000 | 3.0142 |
| 3.0801 | 0.59 | 4000 | 2.8347 |
| 2.9744 | 0.88 | 6000 | 2.7643 |
| 2.494 | 1.18 | 8000 | 2.7605 |
| 2.4417 | 1.47 | 10000 | 2.7790 |
| 2.4042 | 1.77 | 12000 | 2.7382 |
| 2.1285 | 2.06 | 14000 | 2.8588 |
| 2.0569 | 2.36 | 16000 | 2.8937 |
| 2.0794 | 2.65 | 18000 | 2.8511 |
| 2.0679 | 2.95 | 20000 | 2.8655 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| facebook/xm_transformer_600m-en_zh-multi_domain | facebook | 2022-01-21T19:02:57Z | 5 | 2 | fairseq | ["fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:covost2", "arxiv:2010.05171", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-zh
datasets:
- must_c
- covost2
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_zh-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-Chinese
- Trained on MuST-C, CoVoST 2, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-zh-cv7_css10](https://huggingface.co/facebook/tts_transformer-zh-cv7_css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_zh-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator([model], cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-zh-cv7_css10",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
| facebook/xm_transformer_600m-en_vi-multi_domain | facebook | 2022-01-21T19:02:41Z | 8 | 1 | fairseq | ["fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "arxiv:2010.05171", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-vi
datasets:
- must_c
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_vi-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-Vietnamese
- Trained on MuST-C, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-vi-cv7](https://huggingface.co/facebook/tts_transformer-vi-cv7)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_vi-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator([model], cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-vi-cv7",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
| facebook/xm_transformer_600m-en_fr-multi_domain | facebook | 2022-01-21T19:01:52Z | 10 | 0 | fairseq | ["fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:must_c", "dataset:europarl_st", "dataset:voxpopuli", "dataset:libritrans", "arxiv:2010.05171", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: en-fr
datasets:
- must_c
- europarl_st
- voxpopuli
- libritrans
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-en_es-multi_domain/resolve/main/common_voice_en_18295850.mp3
---
# xm_transformer_600m-en_fr-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- English-French
- Trained on MuST-C, EuroParl-ST, VoxPopuli, LibriTrans, Multilingual LibriSpeech, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/tts_transformer-fr-cv7_css10](https://huggingface.co/facebook/tts_transformer-fr-cv7_css10)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-en_fr-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator([model], cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/tts_transformer-fr-cv7_css10",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
```
| facebook/xm_transformer_600m-ru_en-multi_domain | facebook | 2022-01-21T18:56:34Z | 6 | 2 | fairseq | ["fairseq", "audio", "audio-to-audio", "speech-to-speech-translation", "dataset:mtedx", "dataset:covost2", "arxiv:2010.05171", "region:us"] | audio-to-audio | 2022-03-02T23:29:05Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
language: ru-en
datasets:
- mtedx
- covost2
widget:
- example_title: Common Voice sample 1
src: https://huggingface.co/facebook/xm_transformer_600m-ru_en-multi_domain/resolve/main/common_voice_ru_18945535.flac
---
# xm_transformer_600m-ru_en-multi_domain
[W2V2-Transformer](https://aclanthology.org/2021.acl-long.68/) speech-to-text translation model from fairseq S2T ([paper](https://arxiv.org/abs/2010.05171)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_to_text)):
- Russian-English
- Trained on mTEDx, CoVoST 2, OpenSTT, Common Voice v7 and CCMatrix
- Speech synthesis with [facebook/fastspeech2-en-ljspeech](https://huggingface.co/facebook/fastspeech2-en-ljspeech)
## Usage
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.speech_to_text.hub_interface import S2THubInterface
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
import torchaudio
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
"facebook/xm_transformer_600m-ru_en-multi_domain",
arg_overrides={"config_yaml": "config.yaml"},
)
model = models[0]
generator = task.build_generator([model], cfg)
# requires 16000Hz mono channel audio
audio, _ = torchaudio.load("/path/to/an/audio/file")
sample = S2THubInterface.get_model_input(task, audio)
text = S2THubInterface.get_prediction(task, model, generator, sample)
# speech synthesis
tts_models, tts_cfg, tts_task = load_model_ensemble_and_task_from_hf_hub(
f"facebook/fastspeech2-en-ljspeech",
arg_overrides={"vocoder": "griffin_lim", "fp16": False},
)
tts_model = tts_models[0]
TTSHubInterface.update_cfg_with_data_cfg(tts_cfg, tts_task.data_cfg)
tts_generator = tts_task.build_generator([tts_model], tts_cfg)
tts_sample = TTSHubInterface.get_model_input(tts_task, text)
wav, sr = TTSHubInterface.get_prediction(
tts_task, tts_model, tts_generator, tts_sample
)
ipd.Audio(wav, rate=sr)
```
## Citation
```bibtex
@inproceedings{li-etal-2021-multilingual,
title = "Multilingual Speech Translation from Efficient Finetuning of Pretrained Models",
author = "Li, Xian and
Wang, Changhan and
Tang, Yun and
Tran, Chau and
Tang, Yuqing and
Pino, Juan and
Baevski, Alexei and
Conneau, Alexis and
Auli, Michael",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.68",
doi = "10.18653/v1/2021.acl-long.68",
pages = "827--838",
}
@inproceedings{wang-etal-2020-fairseq,
title = "Fairseq {S}2{T}: Fast Speech-to-Text Modeling with Fairseq",
author = "Wang, Changhan and
Tang, Yun and
Ma, Xutai and
Wu, Anne and
Okhonko, Dmytro and
Pino, Juan",
booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing: System Demonstrations",
month = dec,
year = "2020",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.aacl-demo.6",
pages = "33--39",
}
@inproceedings{wang-etal-2021-fairseq,
title = "fairseq S{\^{}}2: A Scalable and Integrable Speech Synthesis Toolkit",
author = "Wang, Changhan and
Hsu, Wei-Ning and
Adi, Yossi and
Polyak, Adam and
Lee, Ann and
Chen, Peng-Jen and
Gu, Jiatao and
Pino, Juan",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.17",
doi = "10.18653/v1/2021.emnlp-demo.17",
pages = "143--152",
}
```
| Yaia/distilbert-base-uncased-finetuned-emotion | Yaia | 2022-01-21T17:28:21Z | 4 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9257196896784097
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2086
- Accuracy: 0.9255
- F1: 0.9257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8249 | 1.0 | 250 | 0.3042 | 0.9085 | 0.9068 |
| 0.2437 | 2.0 | 500 | 0.2086 | 0.9255 | 0.9257 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
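The card does not include a usage example; a minimal sketch with the `transformers` text-classification pipeline is shown below (the input sentence is illustrative, and the label set comes from the emotion dataset):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Yaia/distilbert-base-uncased-finetuned-emotion")
print(classifier("I can't wait to see you again!"))  # returns an emotion label such as 'joy' with a score
```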
| jiobiala24/wav2vec2-base-checkpoint-7.1 | jiobiala24 | 2022-01-21T15:50:15Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-base-checkpoint-7.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-checkpoint-7.1
This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-6](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-6) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9369
- Wer: 0.3243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3124 | 1.75 | 1000 | 0.5602 | 0.3403 |
| 0.2428 | 3.5 | 2000 | 0.5924 | 0.3431 |
| 0.1884 | 5.24 | 3000 | 0.6161 | 0.3423 |
| 0.1557 | 6.99 | 4000 | 0.6570 | 0.3415 |
| 0.1298 | 8.74 | 5000 | 0.6837 | 0.3446 |
| 0.1141 | 10.49 | 6000 | 0.7304 | 0.3396 |
| 0.1031 | 12.24 | 7000 | 0.7264 | 0.3410 |
| 0.0916 | 13.99 | 8000 | 0.7229 | 0.3387 |
| 0.0835 | 15.73 | 9000 | 0.8078 | 0.3458 |
| 0.0761 | 17.48 | 10000 | 0.8304 | 0.3408 |
| 0.0693 | 19.23 | 11000 | 0.8290 | 0.3387 |
| 0.0646 | 20.98 | 12000 | 0.8593 | 0.3372 |
| 0.0605 | 22.73 | 13000 | 0.8728 | 0.3345 |
| 0.0576 | 24.48 | 14000 | 0.9111 | 0.3297 |
| 0.0529 | 26.22 | 15000 | 0.9247 | 0.3273 |
| 0.0492 | 27.97 | 16000 | 0.9248 | 0.3250 |
| 0.0472 | 29.72 | 17000 | 0.9369 | 0.3243 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| deepparag/DumBot | deepparag | 2022-01-21T15:40:27Z | 148 | 2 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
thumbnail: https://cdn.discordapp.com/app-icons/870239976690970625/c02cae78ae105f07969cfd8f8ea3d0a0.png
tags:
- conversational
license: mit
---
# THIS AI IS OUTDATED. See [Aeona](https://huggingface.co/deepparag/Aeona)
A generative AI made using [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small).
Trained on:
https://www.kaggle.com/Cornell-University/movie-dialog-corpus
https://www.kaggle.com/jef1056/discord-data
[Live Demo](https://dumbot-331213.uc.r.appspot.com/)
Example:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
import torch

tokenizer = AutoTokenizer.from_pretrained("deepparag/DumBot")
model = AutoModelWithLMHead.from_pretrained("deepparag/DumBot")

# Let's chat for 4 lines
for step in range(4):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # print(new_user_input_ids)

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=4,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8
    )

    # pretty print last output tokens from bot
    print("DumBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
| Gianpe/en_textcat_emotion_xlm | Gianpe | 2022-01-21T15:09:03Z | 3 | 0 | spacy | ["spacy", "text-classification", "en", "region:us"] | text-classification | 2022-03-02T23:29:04Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_emotion_xlm
results: []
---
| shivam/xls-r-hindi | shivam | 2022-01-21T14:00:59Z | 7 | 1 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "hi", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4484
- Wer: 1.0145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1844 | 3.4 | 500 | 5.2015 | 0.9999 |
| 3.3962 | 6.8 | 1000 | 3.4017 | 1.0002 |
| 2.5433 | 10.2 | 1500 | 1.6884 | 1.0222 |
| 1.5099 | 13.6 | 2000 | 0.7929 | 1.0188 |
| 1.2685 | 17.01 | 2500 | 0.6122 | 1.0191 |
| 1.1844 | 20.41 | 3000 | 0.5434 | 1.0197 |
| 1.0945 | 23.81 | 3500 | 0.5208 | 1.0316 |
| 1.0506 | 27.21 | 4000 | 0.4941 | 1.0139 |
| 1.0199 | 30.61 | 4500 | 0.4736 | 1.0106 |
| 0.9546 | 34.01 | 5000 | 0.4664 | 1.0164 |
| 0.9388 | 37.41 | 5500 | 0.4565 | 1.0085 |
| 0.9125 | 40.81 | 6000 | 0.4636 | 1.0148 |
| 0.8733 | 44.22 | 6500 | 0.4530 | 1.0154 |
| 0.8829 | 47.62 | 7000 | 0.4494 | 1.0152 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| alistvt/bert-base-uncased-pretrained-mlm-coqa-stories | alistvt | 2022-01-21T13:17:32Z | 6 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrained-mlm-coqa-stories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrained-mlm-coqa-stories
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0573 | 1.0 | 2479 | 1.8805 |
| 1.9517 | 2.0 | 4958 | 1.8377 |
| 1.9048 | 3.0 | 7437 | 1.8310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| alistvt/bert-base-uncased-pretrained-clm-coqa-stories | alistvt | 2022-01-21T12:36:10Z | 20 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-pretrained-clm-coqa-stories
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-pretrained-clm-coqa-stories
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0201 | 1.0 | 2479 | 0.0018 |
| 0.0033 | 2.0 | 4958 | 0.0003 |
| 0.0014 | 3.0 | 7437 | 0.0002 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| deepdml/output | deepdml | 2022-01-21T11:50:22Z | 5 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "mozilla-foundation/common_voice_7_0", "generated_from_trainer", "ab", "dataset:common_voice", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 156.8789
- Wer: 1.3456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp | espnet | 2022-01-21T04:15:13Z | 8 | 2 | espnet | ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
```
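For quick inference outside the full recipe, the checkpoint can also be loaded through the ESPnet2 Python API (a sketch that assumes the `espnet_model_zoo` package is installed and uses a placeholder audio path):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# downloads the packed model from the Hub and builds the inference wrapper
speech2text = Speech2Text.from_pretrained(
    "espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp"
)

speech, rate = soundfile.read("speech.wav")  # 16 kHz mono audio expected
text, tokens, token_ids, hypothesis = speech2text(speech)[0]  # best hypothesis
print(text)
```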
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp | espnet | 2022-01-21T04:09:13Z | 4 | 0 | espnet | ["espnet", "audio", "automatic-speech-recognition", "en", "dataset:librispeech", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
```
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Gigworks/ASR_zh_espnet2
|
Gigworks
| 2022-01-21T02:58:59Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-03-02T23:29:04Z |
<b>Speech-To-Text Chinese Model</b>
<br/><br/>
Reference: <br/>
Model - https://huggingface.co/espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char <br/>
Code - https://huggingface.co/spaces/akhaliq/espnet2_asr/blob/main/app.py
|
guoqiang/glm
|
guoqiang
| 2022-01-21T01:21:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
# WudaoSailing
WudaoSailing is a package for pretraining Chinese language models and finetuning them on downstream tasks. It currently supports the GLM, Bert, T5, Cogview and Roberta models.
## Get Started
### Docker Image
We prepare two docker images based on CUDA 10.2 and CUDA 11.2. You can build images from the docker file [docs/docker/cuda102.dockerfile](docs/docker/cuda102.dockerfile) or pull the pre-built images from Docker Hub and run them with docker v19.03+
```shell
nvidia-docker run -id --hostname=V100 --network=host \
--ipc=host --shm-size=16gb --name=deepspeed-cuda \
-e NVIDIA_VISIBLE_DEVICES=0,1,2,3 \
-v /DATA/disk1/docker/containers/:/data deepspeed/cuda102:latest
```
To build the image yourself, run the following (or replace `cuda102` with `cuda112` for the CUDA 11.2 image):
```shell
docker build -f cuda102.dockerfile -t deepspeed/cuda102 .
```
### Clone this repo
```shell
git clone https://github.com/wangguojim/WudaoSailing.git
cd WudaoSailing
pip install -r requirements.txt
```
## GLM
We show some examples based on the GLM model.
### Finetune
We provide scripts for finetuning GLM on some downstream tasks.
#### SuperGLUE
- Download the [SuperGlue](https://super.gluebenchmark.com/tasks) data and check the experiment setup in
[examples/glm/scripts/ds_finetune_superglue.sh](examples/glm/scripts/ds_finetune_superglue.sh). Note that `DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH`
need to be changed to your local path. You may also change the `batch-size` and `nproc_per_node` according to your
available hardware.
- Run the following script for the text similarity finetuning task (using the AFQMC dataset as an example)
```
cd examples/glm/
bash scripts/ds_finetune_superglue.sh \
     config/model_blocklm_large_chinese.sh \
     config_tasks/task_afqmc.sh
```
- Run the following script for the text classification finetuning task (using the TNews dataset as an example)
```
cd examples/glm/
bash scripts/ds_finetune_superglue.sh \
     config/model_blocklm_large_chinese.sh \
     config_tasks/task_tnews.sh
```
- Run the following script for the causal inference finetuning task (using the COPA dataset as an example)
```
cd examples/glm/
bash scripts/ds_finetune_superglue.sh \
     config/model_blocklm_large_chinese.sh \
     config_tasks/task_copa.sh
```
- To apply GLM to a new NLU dataset with cloze-filling finetuning, implement a `DataProcessor` in
[examples/glm/tasks/superglue/dataset.py](examples/glm/tasks/superglue/dataset.py) for data loading and add a `PVP` in
[examples/glm/tasks/superglue/pvp.py](examples/glm/tasks/superglue/pvp.py) for the cloze question. More details can be found
[here](examples/glm/tasks/superglue/README.md).
#### Blank Filling (Interactive)
* Change `CHECKPOINT_PATH` to your local path. Run the following script
```
bash config/generate_block.sh \
     config/model_blocklm_large_chinese.sh
```
##### Example1 (Entity Prediction):
Context: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建,门高25米,顶上矗立两武士青铜古兵车铸像。
GLM:拿破仑军队攻克米兰城
##### Example2 (Sentence Prediction)
Context: 工业互联网(Industrial Internet)是新一代信息通信技术与工业经济深度融合的新型基础设施、应用模式和工业生态,通过对人、机、物、系统等的全面连接,构建起覆盖全产业链、全价值链的全新制造和服务体系,为工业乃至产业数字化、网络化、智能化发展提供了实现途径,是第四次工业革命的重要基石。[sMASK]它以网络为基础、平台为中枢、数据为要素、安全为保障,既是工业数字化、网络化、智能化转型的基础设施,也是互联网、大数据、人工智能与实体经济深度融合的应用模式,同时也是一种新业态、新产业,将重塑企业形态、供应链和产业链。当前,工业互联网融合应用向国民经济重点行业广泛拓展,形成平台化设计、智能化制造、网络化协同、个性化定制、服务化延伸、数字化管理六大新模式,赋能、赋智、赋值作用不断显现,有力的促进了实体经济提质、增效、降本、绿色、安全发展。
GLM: 工业互联网是制造业技术、管理、模式的重大变革,是推动互联网、大数据、人工智能和实体经济深度融合的重要载体,是建设制造强国和网络强国的重要基础。
##### Example3 (Long Text Generation)
Context: 问题:高斯所在的国家有什么汽车品牌?答案:[gMASK]
GLM:答案:[gMASK]<|startofpiece|>德国奔驰、德国大众、别克、沃尔沃、斯柯达、本田、雪铁龙.
### Ptuning
Run the following script to integrate p-tuning with GLM:
```shell
cd algutils/ptuning/
bash finetune_zy.sh
```
### Pretrain
Run the following script to pre-train the GLM-Large model
```shell
cd examples/glm/
bash scripts/ds_pretrain_nvidia.sh config/ds_block_large.sh
```
The script [examples/glm/config/ds_pretrain_nvidia.sh](examples/glm/config/ds_pretrain_nvidia.sh) launches the training program with DeepSpeed. You should change `NUM_WORKERS` and `NUM_GPUS_PER_WORKER` to the number of workers and the number of gpus per worker. Also change `HOST_FILE_PATH` to the path to an OpenMPI-style hostfile. More details about DeepSpeed launcher can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node).
The file [examples/glm/config/ds_block_large.sh](examples/glm/config/ds_block_large.sh) defines the hyperparameters for pretraining. Most of the arguments are fairly self-explanatory. Specifically, `--train-data` can be multiple keywords defined in `NAMED_CORPORA` in [data_utils/corpora.py](data_utils/corpora.py). The hyperparameters of the optimizer are defined in the corresponding json file under `config`. The semantics of the json file can be found [here](https://www.deepspeed.ai/docs/config-json).
## Bert
We show some examples based on the Bert model.
### Pretrain
Run the following script to pre-train the Bert model
```shell
cd examples/bert/
python quick_start.py
```
## CogView
### Pretrain
Run the following script to pre-train the cogview model
```shell
cd examples/cogview/
bash config/pretrain_multiple_nodes.sh
```
### Inference
Run the following script to test the model's text-to-image generation ability
```shell
cd examples/cogview/
bash config/text2image_cogview.sh
```
|
kika2000/wav2vec2-large-xls-r-300m-kika10
|
kika2000
| 2022-01-21T00:02:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-georgian2-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-georgian2-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4317
- Wer: 0.4280
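A minimal inference sketch (not part of the original card), assuming the standard 🤗 Transformers ASR pipeline works for this checkpoint and that `audio.wav` is a 16 kHz mono recording:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the speech recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="kika2000/wav2vec2-large-xls-r-300m-kika10",
)

# "audio.wav" is a placeholder path for a 16 kHz mono recording
print(asr("audio.wav")["text"])
```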
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7071 | 4.76 | 400 | 0.6897 | 0.7844 |
| 0.2908 | 9.52 | 800 | 0.4630 | 0.5582 |
| 0.1392 | 14.29 | 1200 | 0.4501 | 0.5006 |
| 0.0977 | 19.05 | 1600 | 0.4593 | 0.4755 |
| 0.075 | 23.81 | 2000 | 0.4340 | 0.4401 |
| 0.0614 | 28.57 | 2400 | 0.4317 | 0.4280 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/anticarbons
|
huggingtweets
| 2022-01-20T22:52:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/anticarbons/1642719091326/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1477498953524518912/yvJkd9VL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ANTICARBON</div>
<div style="text-align: center; font-size: 14px;">@anticarbons</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ANTICARBON.
| Data | ANTICARBON |
| --- | --- |
| Tweets downloaded | 2518 |
| Retweets | 427 |
| Short tweets | 352 |
| Tweets kept | 1739 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/s9q99sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anticarbons's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/anticarbons')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Gianpe/en_textcat_emotion_umberto
|
Gianpe
| 2022-01-20T21:45:19Z | 1 | 0 |
spacy
|
[
"spacy",
"text-classification",
"en",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_textcat_emotion_umberto
results: []
---
|
milyiyo/selectra-small-finetuned-amazon-review
|
milyiyo
| 2022-01-20T21:11:57Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: selectra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.737
- name: F1
type: f1
value: 0.7437773019932409
- name: Precision
type: precision
value: 0.7524857881639091
- name: Recall
type: recall
value: 0.737
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selectra-small-finetuned-amazon-review
This model is a fine-tuned version of [Recognai/selectra_small](https://huggingface.co/Recognai/selectra_small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6279
- Accuracy: 0.737
- F1: 0.7438
- Precision: 0.7525
- Recall: 0.737
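A minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard text-classification pipeline and that inputs are Spanish product reviews; the example sentence is a placeholder:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="milyiyo/selectra-small-finetuned-amazon-review",
)

# Placeholder Spanish review; the model predicts a review-rating label
print(classifier("El producto llegó a tiempo y funciona perfectamente."))
```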
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.5 | 500 | 0.7041 | 0.7178 | 0.6724 | 0.6715 | 0.7178 |
| 0.7908 | 1.0 | 1000 | 0.6365 | 0.7356 | 0.7272 | 0.7211 | 0.7356 |
| 0.7908 | 1.5 | 1500 | 0.6204 | 0.7376 | 0.7380 | 0.7387 | 0.7376 |
| 0.6358 | 2.0 | 2000 | 0.6162 | 0.7386 | 0.7377 | 0.7380 | 0.7386 |
| 0.6358 | 2.5 | 2500 | 0.6228 | 0.7274 | 0.7390 | 0.7576 | 0.7274 |
| 0.5827 | 3.0 | 3000 | 0.6188 | 0.7378 | 0.7400 | 0.7425 | 0.7378 |
| 0.5827 | 3.5 | 3500 | 0.6246 | 0.7374 | 0.7416 | 0.7467 | 0.7374 |
| 0.5427 | 4.0 | 4000 | 0.6266 | 0.7446 | 0.7452 | 0.7465 | 0.7446 |
| 0.5427 | 4.5 | 4500 | 0.6331 | 0.7392 | 0.7421 | 0.7456 | 0.7392 |
| 0.5184 | 5.0 | 5000 | 0.6279 | 0.737 | 0.7438 | 0.7525 | 0.737 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
oandreae/financial_sentiment_model
|
oandreae
| 2022-01-20T20:00:01Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"perceiver",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- recall
- accuracy
- precision
model-index:
- name: financial_sentiment_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Recall
type: recall
value: 0.8839956357328868
- name: Accuracy
type: accuracy
value: 0.8804123711340206
- name: Precision
type: precision
value: 0.8604175202419276
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# financial_sentiment_model
This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3467
- Recall: 0.8840
- Accuracy: 0.8804
- Precision: 0.8604
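A minimal usage sketch (not part of the original card); it assumes the standard text-classification pipeline supports this Perceiver checkpoint, and the example sentence is a placeholder:
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="oandreae/financial_sentiment_model",
)

# Placeholder financial-news sentence
print(sentiment("Operating profit rose clearly compared with the previous quarter."))
```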
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.4481 | 1.0 | 273 | 0.4035 | 0.8526 | 0.8433 | 0.7955 |
| 0.4069 | 2.0 | 546 | 0.4478 | 0.8683 | 0.8289 | 0.8123 |
| 0.2225 | 3.0 | 819 | 0.3167 | 0.8747 | 0.8680 | 0.8387 |
| 0.1245 | 4.0 | 1092 | 0.3467 | 0.8840 | 0.8804 | 0.8604 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
tomwetherell/TOMFINSEN
|
tomwetherell
| 2022-01-20T18:19:24Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"perceiver",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- recall
- accuracy
- precision
model-index:
- name: TOMFINSEN
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
args: sentences_50agree
metrics:
- name: Recall
type: recall
value: 0.8985861629736692
- name: Accuracy
type: accuracy
value: 0.8742268041237113
- name: Precision
type: precision
value: 0.8509995913451198
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TOMFINSEN
This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3642
- Recall: 0.8986
- Accuracy: 0.8742
- Precision: 0.8510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:|
| 0.5403 | 1.0 | 273 | 0.4207 | 0.8358 | 0.8619 | 0.8534 |
| 0.3939 | 2.0 | 546 | 0.3750 | 0.8943 | 0.8577 | 0.8225 |
| 0.1993 | 3.0 | 819 | 0.3113 | 0.8882 | 0.8660 | 0.8367 |
| 0.301 | 4.0 | 1092 | 0.3642 | 0.8986 | 0.8742 | 0.8510 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
|
nntadotzip
| 2022-01-20T18:06:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlnet",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-BertPretrainedTokenizerFast
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3489
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 382 | 0.4695 |
| 0.5633 | 2.0 | 764 | 0.3361 |
| 0.3533 | 3.0 | 1146 | 0.3489 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ucberkeley-dlab/hate-measure-roberta-large
|
ucberkeley-dlab
| 2022-01-20T17:57:30Z | 7 | 4 |
tf-keras
|
[
"tf-keras",
"text-classification",
"hate-speech",
"counterspeech",
"irt",
"arxiv:2009.10277",
"en",
"dataset:ucberkeley-dlab/measuring-hate-speech",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text-classification
- hate-speech
- counterspeech
- irt
- arxiv:2009.10277
datasets:
- ucberkeley-dlab/measuring-hate-speech
---
# Measuring hate speech: RoBERTa-Large
This model predicts a continuous hate speech score as described in Kennedy et al. (2020).
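A minimal loading sketch (not part of the original card); it assumes the checkpoint can be loaded through the `huggingface_hub` Keras integration, and it does not show the text preprocessing, which is not documented here:
```python
from huggingface_hub import from_pretrained_keras

# Load the Keras model from the Hub; inputs must be preprocessed/tokenized
# exactly as in the original training pipeline, which this card does not describe.
model = from_pretrained_keras("ucberkeley-dlab/hate-measure-roberta-large")
model.summary()
```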
## Citation
```
@article{kennedy2020constructing,
title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application},
author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia},
journal={arXiv preprint arXiv:2009.10277},
year={2020}
}
```
## References
Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277.
|
ml6team/distilbart-tos-summarizer-tosdr
|
ml6team
| 2022-01-20T15:21:41Z | 22 | 15 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"t&c",
"tos",
"distilbart",
"distilbart-6-6",
"en",
"dataset:tosdr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- summarization
- t&c
- tos
- distilbart
- distilbart-6-6
datasets:
- tosdr
metrics:
- rouge1
- rouge2
- rougel
inference:
parameters:
min_length: 5
max_length: 512
do_sample: False
widget:
- text: "In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions. Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws. Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use. Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities. Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides."
---
# T&C Summarization Model
T&C Summarization Model based on [sshleifer/distilbart-cnn-6-6](https://huggingface.co/sshleifer/distilbart-cnn-6-6).
This abstractive summarization model is a part of a bigger end-to-end T&C summarizer pipeline
which is preceded by LSA (Latent Semantic Analysis) extractive summarization. The extractive
summarization shortens the T&C to be further summarized by this model.
## Finetuning Corpus
We collaborated with [TOSDR](https://tosdr.org/) to work with their data, and the model is finetuned accordingly. Both the articles and the
reference summaries are shortened via extractive summarization before they are used to finetune the model.
## Contact Us
https://ml6.eu/
This abstractive model finetuning is a continuation of the Christmas Project 2021 done at ML6: https://bit.ly/XmasProjects
## Load Finetuned Model
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
```
## Code Sample
This sample requires [sumy](https://pypi.org/project/sumy/), the LSA extractive summarization library, as an additional package to
run.
```
import re
import nltk
nltk.download('punkt')
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.nlp.stemmers import Stemmer
from sumy.summarizers.lsa import LsaSummarizer
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
LANGUAGE = "english"
EXTRACTED_ARTICLE_SENTENCES_LEN = 12
stemmer = Stemmer(LANGUAGE)
lsa_summarizer = LsaSummarizer(stemmer)
tokenizer = AutoTokenizer.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
model = AutoModelForSeq2SeqLM.from_pretrained("ml6team/distilbart-tos-summarizer-tosdr")
def get_extractive_summary(text, sentences_count):
parser = PlaintextParser.from_string(text, Tokenizer(LANGUAGE))
summarized_info = lsa_summarizer(parser.document, sentences_count)
summarized_info = [element._text for element in summarized_info]
return ' '.join(summarized_info)
def get_summary(dict_summarizer_model, dict_tokenizer, text_content):
text_content = get_extractive_summary(text_content, EXTRACTED_ARTICLE_SENTENCES_LEN)
tokenizer = dict_tokenizer['tokenizer']
model = dict_summarizer_model['model']
inputs = tokenizer(text_content, max_length=dict_tokenizer['max_length'], truncation=True, return_tensors="pt")
outputs = model.generate(
inputs["input_ids"], max_length=dict_summarizer_model['max_length'], min_length=dict_summarizer_model['min_length'],
)
summarized_text = tokenizer.decode(outputs[0])
match = re.search(r"<s>(.*)</s>", summarized_text)
if match is not None: summarized_text = match.group(1)
return summarized_text.replace('<s>', '').replace('</s>', '')
test_tos = """
In addition, certain portions of the Web Site may be subject to additional terms of use that we make available for your review or otherwise link to that portion of the Web Site to which such additional terms apply. By using such portions, or any part thereof, you agree to be bound by the additional terms of use applicable to such portions.
Age Restrictions The Web Site may be accessed and used only by individuals who can form legally binding contracts under applicable laws, who are at least 18 years of age or the age of majority in their state or territory of residence (if higher than 18), and who are not barred from using the Web Site under applicable laws.
Our Technology may not be copied, modified, reproduced, republished, posted, transmitted, sold, offered for sale, or redistributed in any way without our prior written permission and the prior written permission of our applicable licensors. Nothing in these Site Terms of Use grants you any right to receive delivery of a copy of Our Technology or to obtain access to Our Technology except as generally and ordinarily permitted through the Web Site according to these Site Terms of Use.
Furthermore, nothing in these Site Terms of Use will be deemed to grant you, by implication, estoppel or otherwise, a license to Our Technology. Certain of the names, logos, and other materials displayed via the Web site constitute trademarks, tradenames, service marks or logos (“Marks”) of us or other entities. You are not authorized to use any such Marks. Ownership of all such Marks and the goodwill associated therewith remains with us or those other entities.
Any use of third party software provided in connection with the Web Site will be governed by such third parties’ licenses and not by these Site Terms of Use. Information on this Web Site may contain technical inaccuracies or typographical errors. Lenovo provides no assurances that any reported problems may be resolved with the use of any information that Lenovo provides
"""
model_dict = {
'model': model,
'max_length': 512,
'min_length': 4
}
tokenizer_dict = {
'tokenizer': tokenizer,
'max_length': 1024
}
print(get_summary(model_dict, tokenizer_dict, test_tos))
```
|
huggingtweets/aevaeavaevevave
|
huggingtweets
| 2022-01-20T15:13:33Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/aevaeavaevevave/1642691608974/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471448753353670660/T0h3zXn-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">aeva</div>
<div style="text-align: center; font-size: 14px;">@aevaeavaevevave</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from aeva.
| Data | aeva |
| --- | --- |
| Tweets downloaded | 3184 |
| Retweets | 985 |
| Short tweets | 659 |
| Tweets kept | 1540 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g4kejp0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aevaeavaevevave's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ikuw0pg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ikuw0pg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/aevaeavaevevave')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
aidan-o-brien/recipe-improver
|
aidan-o-brien
| 2022-01-20T14:26:53Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: recipe-improver
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# recipe-improver
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5570
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 5539, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.5570 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Aleksandra/herbert-base-cased-finetuned-squad
|
Aleksandra
| 2022-01-20T13:14:11Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: herbert-base-cased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# herbert-base-cased-finetuned-squad
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2071
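A minimal usage sketch (not part of the original card), assuming the standard question-answering pipeline applies; the Polish question/context pair is a placeholder:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Aleksandra/herbert-base-cased-finetuned-squad",
)

# Placeholder Polish example
result = qa(
    question="Kto napisał Pana Tadeusza?",
    context="Pan Tadeusz został napisany przez Adama Mickiewicza.",
)
print(result["answer"])
```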
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 233 | 1.2474 |
| No log | 2.0 | 466 | 1.1951 |
| 1.3459 | 3.0 | 699 | 1.2071 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
g30rv17ys/avhubert
|
g30rv17ys
| 2022-01-20T13:07:45Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
https://dl.fbaipublicfiles.com/avhubert/model/lrs3_vox/vsr/base_vox_433h.pt
|
mptrigo/run1
|
mptrigo
| 2022-01-20T10:37:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model_index:
- name: run1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metric:
name: Bleu
type: bleu
value: 8.4217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# run1
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-es](https://huggingface.co/Helsinki-NLP/opus-mt-es-es) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1740
- Bleu: 8.4217
- Gen Len: 15.9457
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 250 | 4.2342 | 0.8889 | 83.4022 |
| 4.6818 | 2.0 | 500 | 3.7009 | 4.1671 | 35.587 |
| 4.6818 | 3.0 | 750 | 3.4737 | 7.6414 | 23.9674 |
| 3.4911 | 4.0 | 1000 | 3.3713 | 7.7512 | 18.6957 |
| 3.4911 | 5.0 | 1250 | 3.2689 | 8.0901 | 19.4674 |
| 3.0164 | 6.0 | 1500 | 3.2194 | 8.5708 | 25.0543 |
| 3.0164 | 7.0 | 1750 | 3.1853 | 9.5275 | 23.9239 |
| 2.6954 | 8.0 | 2000 | 3.1562 | 8.5635 | 18.9674 |
| 2.6954 | 9.0 | 2250 | 3.1564 | 8.2031 | 17.5978 |
| 2.4503 | 10.0 | 2500 | 3.1314 | 8.5638 | 18.1522 |
| 2.4503 | 11.0 | 2750 | 3.1511 | 8.8428 | 17.913 |
| 2.2554 | 12.0 | 3000 | 3.1513 | 8.1244 | 17.0 |
| 2.2554 | 13.0 | 3250 | 3.1664 | 8.0157 | 16.2717 |
| 2.1202 | 14.0 | 3500 | 3.1656 | 8.7758 | 16.6087 |
| 2.1202 | 15.0 | 3750 | 3.1550 | 8.4637 | 16.4565 |
| 2.0082 | 16.0 | 4000 | 3.1702 | 8.2488 | 15.8587 |
| 2.0082 | 17.0 | 4250 | 3.1725 | 8.609 | 16.3043 |
| 1.9274 | 18.0 | 4500 | 3.1750 | 8.4476 | 15.8043 |
| 1.9274 | 19.0 | 4750 | 3.1734 | 8.4753 | 16.5543 |
| 1.888 | 20.0 | 5000 | 3.1740 | 8.4217 | 15.9457 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.1.dev0
- Tokenizers 0.10.3
|
dbsamu/distilbert-base-uncased-finetuned-ner
|
dbsamu
| 2022-01-20T10:30:26Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: en
metrics:
- name: Precision
type: precision
value: 0.8120642485217545
- name: Recall
type: recall
value: 0.830235495804385
- name: F1
type: f1
value: 0.8210493441599
- name: Accuracy
type: accuracy
value: 0.9203828724683252
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2781
- Precision: 0.8121
- Recall: 0.8302
- F1: 0.8210
- Accuracy: 0.9204
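A minimal usage sketch (not part of the original card), assuming the standard token-classification pipeline; the example sentence is a placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dbsamu/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Hugging Face is based in New York City."))
```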
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3504 | 1.0 | 1250 | 0.2922 | 0.7930 | 0.8075 | 0.8002 | 0.9115 |
| 0.2353 | 2.0 | 2500 | 0.2711 | 0.8127 | 0.8264 | 0.8195 | 0.9196 |
| 0.1745 | 3.0 | 3750 | 0.2781 | 0.8121 | 0.8302 | 0.8210 | 0.9204 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
dehio/german-qg-t5-e2e-quad
|
dehio
| 2022-01-20T09:40:47Z | 5 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"question generation",
"de",
"dataset:deepset/germanquad",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
widget:
- text: "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge zwei seltene Kurzschnäuzige Seepferdchen entdeckt. Die Tiere seien vergangene Woche bei einer sogenannten Spülsaumkontrolle entdeckt worden, bei der die Strände eigentlich nach Müll und toten Vögeln abgesucht würden, sagte der Geschäftsführer der zuständigen Naturschutz- und Forschungsgemeinschaft Mellumrat, Mathias Heckroth. Dabei seien den Naturschützern am Nordstrand kurz hintereinander die beiden leblosen, nur wenige Zentimeter großen Tiere aufgefallen. Experten der Nationalparkverwaltung bestimmten beide Tiere als Kurzschnäuzige Seepferdchen (Hippocampus hippocampus)."
inference:
parameters:
max_length: 128
language:
- de
tags:
- question generation
datasets:
- deepset/germanquad
model-index:
- name: german-qg-t5-e2e-quad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-qg-t5-e2e-quad (Work in progress)
This model is an end-to-end question generation model for German. Given a text, it generates several questions about it. It is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad).
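A minimal generation sketch (not part of the original card), mirroring the widget settings above (`max_length=128`); whether the base model's `generate questions:` prefix is still required after fine-tuning is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("dehio/german-qg-t5-e2e-quad")
model = AutoModelForSeq2SeqLM.from_pretrained("dehio/german-qg-t5-e2e-quad")

text = (
    "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge "
    "zwei seltene Kurzschnäuzige Seepferdchen entdeckt."
)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```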
## Model description
More information needed
## Training and evaluation data
- Bleu_1: 0.196051
- Bleu_2: 0.122380
- Bleu_3: 0.079980
- Bleu_4: 0.053672
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
hrdipto/wav2vec2-xls-r-tf-left-right-shuru
|
hrdipto
| 2022-01-20T08:48:17Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-tf-left-right-shuru
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-tf-left-right-shuru
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0921
- Wer: 1.2628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5528 | 23.81 | 500 | 0.5509 | 1.9487 |
| 0.2926 | 47.62 | 1000 | 0.1306 | 1.2756 |
| 0.1171 | 71.43 | 1500 | 0.1189 | 1.2628 |
| 0.0681 | 95.24 | 2000 | 0.0921 | 1.2628 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
ml6team/distilbert-base-dutch-cased-toxic-comments
|
ml6team
| 2022-01-20T08:21:12Z | 10 | 6 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"nl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- nl
tags:
- text-classification
- pytorch
widget:
- text: "Ik heb je lief met heel mijn hart"
example_title: "Non toxic comment 1"
- text: "Dat is een goed punt, zo had ik het nog niet bekeken."
example_title: "Non toxic comment 2"
- text: "Wat de fuck zei je net tegen me, klootzak?"
example_title: "Toxic comment 1"
- text: "Rot op, vuile hoerenzoon."
example_title: "Toxic comment 2"
license: apache-2.0
metrics:
- Accuracy, F1 Score, Recall, Precision
---
# distilbert-base-dutch-toxic-comments
## Model description:
This model was created to detect toxic or potentially harmful comments.
For this model, we finetuned the multilingual [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) model on the translated [Jigsaw Toxicity dataset](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge).
The original dataset was translated using the appropriate [MarianMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl).
The model was trained for 2 epochs, on 90% of the dataset, with the following arguments:
```
training_args = TrainingArguments(
learning_rate=3e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
gradient_accumulation_steps=4,
load_best_model_at_end=True,
metric_for_best_model="recall",
epochs=2,
evaluation_strategy="steps",
save_strategy="steps",
save_total_limit=10,
logging_steps=100,
eval_steps=250,
save_steps=250,
weight_decay=0.001,
report_to="wandb")
```
## Model Performance:
Model evaluation was done on 1/10th of the dataset, which served as the test dataset.
| Accuracy | F1 Score | Recall | Precision |
| --- | --- | --- | --- |
| 95.75 | 78.88 | 77.23 | 80.61 |
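A minimal inference sketch (not part of the original card), reusing one of the widget examples above and assuming the standard text-classification pipeline:
```python
from transformers import pipeline

toxicity = pipeline(
    "text-classification",
    model="ml6team/distilbert-base-dutch-cased-toxic-comments",
)

# One of the widget examples from this card (a non-toxic Dutch comment)
print(toxicity("Dat is een goed punt, zo had ik het nog niet bekeken."))
```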
## Dataset:
Unfortunately we cannot open-source the dataset, since we are bound by the underlying Jigsaw license.
|
huggingtweets/chickenhalf
|
huggingtweets
| 2022-01-20T07:52:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/chickenhalf/1642665052826/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1482989404125806596/JtLgKHTu_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">chicken sandwich</div>
<div style="text-align: center; font-size: 14px;">@chickenhalf</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from chicken sandwich.
| Data | chicken sandwich |
| --- | --- |
| Tweets downloaded | 3202 |
| Retweets | 126 |
| Short tweets | 427 |
| Tweets kept | 2649 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r0cwhle/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chickenhalf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chickenhalf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
LiqiangXiao/ConvSearch_QU
|
LiqiangXiao
| 2022-01-20T06:32:35Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:2109.05460",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
## End-to-end Conversational search model
An end-to-end conversational search system for online shopping. It was introduced in [this paper](https://arxiv.org/abs/2109.05460) published at EMNLP.
## Model description
ConvSearch is an end-to-end conversational search system that deeply combines the dialog and search systems to improve search performance. In particular, the Product Search module leverages both structured product attributes and unstructured product text (e.g. profile), where the product text may contain phrases matching user utterances when the schema is incomplete or when a product attribute value is missing. Taken together, our system has the advantage of both reduced error accumulation across individual modules and enhanced robustness against product schema/knowledge gaps.
## Intended uses & limitations
You can use the raw model to understand the dialog between a consumer and the server. The concatenated dialogs can be parsed into intents (e.g. inform, request, buy, etc.) and product attributes.
You can also fine-tune this model on similar downstream tasks, such as a shopping dialog system for your own scenario or a customer service system. Since our model is sequence-to-sequence, any dialog system that can be reformulated as a sequence-to-sequence task can be implemented on top of this model.
## How to use
You can use this model directly with:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/ConvSearch_QU")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/ConvSearch_QU")
```
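Continuing from the loading snippet above, a hedged generation sketch; the dialog string below and its `user:` formatting are placeholders, since the exact input format is not documented in this card:
```python
# Assumes `tokenizer` and `model` from the snippet above are already loaded
dialog = "user: I am looking for a waterproof bluetooth speaker under 50 dollars."
inputs = tokenizer(dialog, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
# The decoded output is expected to contain parsed intents/attributes
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```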
## Training data
ConvSearch was pretrained on a dialog corpus with 49,999 dialogs/942,766 turns.
|
rdpatilds/distilbert-finetuned-imdb
|
rdpatilds
| 2022-01-20T05:49:25Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: rdpatilds/distilbert-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# rdpatilds/distilbert-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6914
- Validation Loss: 2.5383
- Epoch: 0
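A minimal usage sketch (not part of the original card); it assumes the fill-mask pipeline can load this TensorFlow checkpoint, and the example sentence is a placeholder:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="rdpatilds/distilbert-finetuned-imdb",
    framework="tf",  # the repository ships TensorFlow weights
)

print(fill_mask("This movie was absolutely [MASK]."))
```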
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.6914 | 2.5383 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
LiqiangXiao/summarization
|
LiqiangXiao
| 2022-01-20T05:01:36Z | 5 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
## Copy-or-Rewrite
This repository contains the code of the paper "Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning", a model built for human-like summarization and trained with actor-critic reinforcement learning. This work significantly improved ROUGE scores on the CNN/DM dataset by 1.7 points and increased the informativeness and readability of the generated summaries. It implements a more human-like workflow for summarization that addresses the information-loss problem. It contains a novel hierarchical transformer module that represents articles at both the word and sentence level, and a new reinforcement learning method that can effectively train the two-step model.
## Model description
Copy-or-Rewrite is a model that improves the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from information loss in the abstraction step because they compress all the selected sentences without distinguishing between them. Especially when a whole sentence is summary-worthy, salient content would be lost by compression. To address this problem, we propose HYSUM, a hybrid framework for summarization that can flexibly switch between copying a sentence and rewriting a sentence according to the degree of redundancy. In this way, our approach can effectively combine the advantages of the two branches of summarization, balancing informativeness and conciseness. Moreover, based on Hierarchical Reinforcement Learning, we propose an end-to-end reinforcing method that bridges the extraction module and the rewriting module, which enhances the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state of the art on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than those of popular models.
## Intended uses & limitations
With this repository, you can generate informative and concise summaries for input articles. For other tasks, you may use the hierarchical representation module to effectively represent an article. The parameters of the model are pre-trained on the CNN/DM dataset; you may need to fine-tune it on your own dataset when needed.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization")
```
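A short generation sketch building on the snippet above; the input text and generation settings are illustrative assumptions rather than values from the paper:
```python
article = "Replace this with the news article you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
# Beam search is a common default for BART-style summarizers; tune as needed.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=142, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```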
## Training data
This model used the non-anonymous version of CNN/Daily Mail dataset.
## BibTeX entry and citation info
```bibtex
@inproceedings{DBLP:conf/aaai/XiaoWHJ20,
  author    = {Liqiang Xiao and
               Lu Wang and
               Hao He and
               Yaohui Jin},
  title     = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement
               Learning},
  booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI}
               2020, The Thirty-Second Innovative Applications of Artificial Intelligence
               Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational
               Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA,
               February 7-12, 2020},
  pages     = {9306--9313},
  publisher = {{AAAI} Press},
  year      = {2020},
  url       = {https://aaai.org/ojs/index.php/AAAI/article/view/6470},
  timestamp = {Tue, 02 Feb 2021 08:00:14 +0100},
  biburl    = {https://dblp.org/rec/conf/aaai/XiaoWHJ20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
abdelkader/distilbert-base-uncased-finetuned-clinc
|
abdelkader
| 2022-01-20T04:59:36Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9174193548387096
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7713
- Accuracy: 0.9174
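A minimal inference sketch; the example utterance is an illustrative assumption, and the returned label comes from the model's config (it may be a generic `LABEL_id` if intent names were not stored):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="abdelkader/distilbert-base-uncased-finetuned-clinc")
print(classifier("Please set an alarm for six in the morning"))
```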
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2831 | 0.7426 |
| 3.785 | 2.0 | 636 | 1.8739 | 0.8335 |
| 3.785 | 3.0 | 954 | 1.1525 | 0.8926 |
| 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 |
| 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mrp/marian-finetuned-kde4-en-to-fr
|
mrp
| 2022-01-20T04:05:30Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 50.20410659441166
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9643
- Bleu: 50.2041
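A minimal translation sketch; the input sentence is an illustrative assumption:
```python
from transformers import pipeline

translator = pipeline("translation", model="mrp/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```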
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
ethzanalytics/ai-msgbot-gpt2-XL
|
ethzanalytics
| 2022-01-20T01:40:42Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text-generation
- gpt2
- gpt
license: mit
datasets:
- natural questions
widget:
- text: "Do you like my new haircut?\nperson beta:\n\n"
example_title: "haircut"
- text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n"
example_title: "teaching"
- text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n"
example_title: "favorite"
- text: "how much does it cost?\nperson beta:\n\n"
example_title: "money"
inference:
parameters:
min_length: 2
max_length: 64
length_penalty: 0.6
no_repeat_ngram_size: 3
do_sample: True
top_p: 0.85
top_k: 10
repetition_penalty: 2.1
---
# ai-msgbot GPT2-XL
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test is to explicitly add `person beta:` at the end of the prompt text, so the model is forced to respond instead of simply continuing the entered prompt.
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?person beta:
yes, i like fried beans.
person alpha:
i wonder when the first beans were cultivated and how they were processed.
person beta:
nitrogenic bacteria (in
```
_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_
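A minimal local-generation sketch; it reuses the prompt format described above and mirrors the inference parameters from the YAML header, though the exact settings are an assumption rather than a prescribed recipe:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ethzanalytics/ai-msgbot-gpt2-XL")
prompt = "do you like to eat beans?\nperson beta:\n\n"
out = generator(prompt, max_length=64, do_sample=True, top_p=0.85, top_k=10,
                no_repeat_ngram_size=3, repetition_penalty=2.1)
print(out[0]["generated_text"])
```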
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
UBC-NLP/ARBERT
|
UBC-NLP
| 2022-01-19T20:10:55Z | 540 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"Arabic BERT",
"MSA",
"Twitter",
"Masked Langauge Model",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- ar
tags:
- Arabic BERT
- MSA
- Twitter
- Masked Langauge Model
widget:
- text: "اللغة العربية هي لغة [MASK]."
---
<img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/>
**ARBERT** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://mageed.arts.ubc.ca/files/2020/12/marbert_arxiv_2020.pdf)**. ARBERT is a large-scale pre-trained masked language model focused on Modern Standard Arabic (MSA). To train ARBERT, we use the same architecture as BERT-base: 12 attention layers, each with 12 attention heads and 768 hidden dimensions, and a vocabulary of 100K WordPieces, making up ∼163M parameters. We train ARBERT on a collection of Arabic datasets comprising **61GB of text** (**6.2B tokens**). For more information, please visit our GitHub [repo](https://github.com/UBC-NLP/marbert).
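A minimal fill-mask sketch; the example sentence mirrors the widget above:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UBC-NLP/ARBERT")
print(fill_mask("اللغة العربية هي لغة [MASK].")[:3])  # top-3 predictions
```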
# BibTex
If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated):
```bibtex
@inproceedings{abdul-mageed-etal-2021-arbert,
title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic",
author = "Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Nagoudi, El Moatez Billah",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-long.551",
doi = "10.18653/v1/2021.acl-long.551",
pages = "7088--7105",
abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.",
}
```
## Acknowledgments
We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](https://www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
|
hrdipto/wav2vec2-xls-r-tf-left-right-trainer
|
hrdipto
| 2022-01-19T20:06:38Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-tf-left-right-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-tf-left-right-trainer
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0090
- eval_wer: 0.0037
- eval_runtime: 11.2686
- eval_samples_per_second: 71.703
- eval_steps_per_second: 8.963
- epoch: 21.05
- step: 4000
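A minimal transcription sketch; `command.wav` is an illustrative placeholder for a 16 kHz mono recording, and decoding a local file requires ffmpeg:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="hrdipto/wav2vec2-xls-r-tf-left-right-trainer")
print(asr("command.wav"))
```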
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
kjackson/distilbert-base-uncased-finetuned-emotion
|
kjackson
| 2022-01-19T19:10:27Z | 0 | 0 | null |
[
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
tags:
- exbert
license: mit
datasets:
- bookcorpus
- wikipedia
---
# RoBERTa base model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
|
vuiseng9/bert-base-squadv1
|
vuiseng9
| 2022-01-19T19:03:57Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
This model is a fork of [```csarron/bert-base-uncased-squad-v1```](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
```
eval_exact_match = 80.9082
eval_f1 = 88.2275
eval_samples = 10784
```
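A minimal question-answering sketch, assuming the tokenizer files are included in this repo; the question and context are illustrative placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-base-squadv1")
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is located in Paris, France."))
```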
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1 \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
indonesian-nlp/wav2vec2-luganda
|
indonesian-nlp
| 2022-01-19T16:19:45Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"lg",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: lg
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: Wav2Vec2 Luganda by Indonesian-NLP
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice lg
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 7.53
---
# Automatic Speech Recognition for Luganda
This is the model built for the
[Mozilla Luganda Automatic Speech Recognition competition](https://zindi.africa/competitions/mozilla-luganda-automatic-speech-recognition).
It is a fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
model on the [Luganda Common Voice dataset](https://huggingface.co/datasets/common_voice) version 7.0.
We also provide a [live demo](https://huggingface.co/spaces/indonesian-nlp/luganda-asr) to test the model.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset[:2]["sentence"])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("indonesian-nlp/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore = [",", "?", ".", "!", "-", ";", ":", '""', "%", "'", '"', "�", "‘", "’", "’"]
chars_to_ignore_regex = f'[{"".join(chars_to_ignore)}]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
if "audio" in batch:
speech_array = torch.tensor(batch["audio"]["array"])
else:
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
WER without KenLM: 15.38 %
WER With KenLM:
**Test Result**: 7.53 %
## Training
The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO
The script used for training can be found [here](https://github.com/indonesian-nlp/luganda-asr)
|
baaastien/xls-r-ab-test
|
baaastien
| 2022-01-19T12:03:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- ab
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
chitra/finetuned-adversarial-paraphrase-model
|
chitra
| 2022-01-19T09:13:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: finetuned-adversarial-paraphrase-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-adversarial-paraphrase-model
This model is a fine-tuned version of [coderpotter/adversarial-paraphrasing-detector](https://huggingface.co/coderpotter/adversarial-paraphrasing-detector) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0848 | 1.0 | 2000 | 5.4633 |
| 0.0495 | 2.0 | 4000 | 6.0352 |
| 0.0121 | 3.0 | 6000 | 7.5680 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/wmascen
|
huggingtweets
| 2022-01-19T04:52:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/wmascen/1642567908765/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1453179488569802752/LsB82o0-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wihrel</div>
<div style="text-align: center; font-size: 14px;">@wmascen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wihrel.
| Data | wihrel |
| --- | --- |
| Tweets downloaded | 2900 |
| Retweets | 203 |
| Short tweets | 236 |
| Tweets kept | 2461 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bsbw98xm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wmascen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/wmascen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/godslovepariah
|
huggingtweets
| 2022-01-19T04:12:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/godslovepariah/1642565537762/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1432780406777020417/XTrp9MCR_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">LOVER//PARIAH</div>
<div style="text-align: center; font-size: 14px;">@godslovepariah</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from LOVER//PARIAH.
| Data | LOVER//PARIAH |
| --- | --- |
| Tweets downloaded | 525 |
| Retweets | 9 |
| Short tweets | 10 |
| Tweets kept | 506 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6l5fj9xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @godslovepariah's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/godslovepariah')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
NbAiLab/roberta_des_128
|
NbAiLab
| 2022-01-19T01:06:51Z | 3 | 0 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
Just for performing some experiments. Do not use.
This needed to be restarted at 100k. I am getting memory errors at the end of the epoch. Not really sure why.
Step 2 is therefore on train_2__4. Static learning rate for a while. The first 100k steps ended at 0.59, which is decent this early. No point in running more epochs here, though. Changing the corpus and continuing training.
|
domdomreloaded/bert-base-uncased-finetuned-swag
|
domdomreloaded
| 2022-01-18T22:33:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6045
- Accuracy: 0.7960
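A minimal inference sketch for the multiple-choice head; the prompt and candidate endings are illustrative assumptions:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("domdomreloaded/bert-base-uncased-finetuned-swag")
model = AutoModelForMultipleChoice.from_pretrained("domdomreloaded/bert-base-uncased-finetuned-swag")

prompt = "She opened the fridge and"
candidates = ["took out a carton of milk.", "drove the carton to the airport."]
enc = tokenizer([prompt] * len(candidates), candidates, return_tensors="pt", padding=True)
# The multiple-choice head expects tensors of shape (batch, num_choices, seq_len).
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**enc).logits
print(candidates[logits.argmax(-1).item()])
```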
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 |
| 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
milyiyo/electra-base-gen-finetuned-amazon-review
|
milyiyo
| 2022-01-18T21:21:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-base-gen-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5024
- name: F1
type: f1
value: 0.5063190059782597
- name: Precision
type: precision
value: 0.5121183330982292
- name: Recall
type: recall
value: 0.5024
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-gen-finetuned-amazon-review
This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8030
- Accuracy: 0.5024
- F1: 0.5063
- Precision: 0.5121
- Recall: 0.5024
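A minimal inference sketch; the Spanish review text is an illustrative assumption, and the predicted label (typically a star rating) comes from the model's config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="milyiyo/electra-base-gen-finetuned-amazon-review")
print(classifier("El producto llegó rápido y funciona muy bien."))
```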
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5135        | 1.0   | 1000 | 1.6580          | 0.4886   | 0.4929 | 0.5077    | 0.4886 |
| 0.4138        | 2.0   | 2000 | 1.7951          | 0.5044   | 0.5093 | 0.5183    | 0.5044 |
| 0.4244        | 3.0   | 3000 | 1.8108          | 0.5022   | 0.5068 | 0.5141    | 0.5022 |
| 0.4231        | 6.0   | 6000 | 1.7636          | 0.4972   | 0.5018 | 0.5092    | 0.4972 |
| 0.3574        | 7.0   | 7000 | 1.8030          | 0.5024   | 0.5063 | 0.5121    | 0.5024 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
malloc/OpenNMT-py-German-English-2-layer-BiLSTM
|
malloc
| 2022-01-18T20:22:23Z | 0 | 0 | null |
[
"translation",
"pytorch",
"de",
"en",
"license:mit",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- de
- en
tags:
- translation
- pytorch
license: mit
datasets:
- IWSLT ‘14 DE-EN
metrics:
- bleu
---
# OpenNMT-py-German-English-2-layer-BiLSTM
[OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework.
OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for German to English translation.
- Configuration: 2-layer BiLSTM with hidden size 500 trained for 20 epochs
- Data: IWSLT ‘14 DE-EN
- BLEU: 30.33
|
malloc/OpenNMT-py-English-German-Transformer
|
malloc
| 2022-01-18T20:18:11Z | 0 | 2 | null |
[
"translation",
"pytorch",
"de",
"en",
"dataset:WMT",
"license:mit",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- de
- en
tags:
- translation
- pytorch
license: mit
datasets:
- WMT
metrics:
- bleu
---
# OpenNMT-py-English-German-Transformer
[OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework.
OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for English to German translation.
- Configuration: Base Transformer configuration with [standard training options](http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model-do-you-support-multi-gpu)
- Data: WMT with shared SentencePiece model
- BLEU:
- newstest2014 = 26.89
- newstest2017 = 28.09
|
milyiyo/electra-small-finetuned-amazon-review
|
milyiyo
| 2022-01-18T17:47:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: electra-small-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: en
metrics:
- name: Accuracy
type: accuracy
value: 0.5504
- name: F1
type: f1
value: 0.5457527808330634
- name: Precision
type: precision
value: 0.5428695841337288
- name: Recall
type: recall
value: 0.5504
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-finetuned-amazon-review
This model is a fine-tuned version of [google/electra-small-discriminator](https://huggingface.co/google/electra-small-discriminator) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0560
- Accuracy: 0.5504
- F1: 0.5458
- Precision: 0.5429
- Recall: 0.5504
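A minimal inference sketch; the review text is an illustrative assumption, and the predicted label (typically a star rating) comes from the model's config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="milyiyo/electra-small-finetuned-amazon-review")
print(classifier("The product arrived quickly and works exactly as described."))
```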
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.2172 | 1.0 | 1000 | 1.1014 | 0.5216 | 0.4902 | 0.4954 | 0.5216 |
| 1.0027 | 2.0 | 2000 | 1.0388 | 0.549 | 0.5471 | 0.5494 | 0.549 |
| 0.9035 | 3.0 | 3000 | 1.0560 | 0.5504 | 0.5458 | 0.5429 | 0.5504 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt
|
vuiseng9
| 2022-01-18T17:45:15Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"onnx",
"bert",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes:
1. Magnitude sparsification at 57.92% upon initialization so that the sparsity over all linear layers of bert-base is at 90%. Parameters are ranked globally via their absolute norm. Only the linear layers of self-attention and the FFNN are targeted.
2. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.4447
eval_f1 = 87.7678
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise
wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt/raw/main/nncf_bert_squad_sparsity.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--optimize_model_before_eval \
--optimized_checkpoint $BASE_MODEL \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--qat_checkpoint $MODELROOT/checkpoint-20000 \
--nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \
--to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
tal-yifat/injury-report-test
|
tal-yifat
| 2022-01-18T16:24:00Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: injury-report-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# injury-report-test
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5697
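A minimal fill-mask sketch; the example sentence is an illustrative assumption about the injury-report domain:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="tal-yifat/injury-report-test")
print(fill_mask("The worker injured his [MASK] while operating the machine."))
```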
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.8158 | 1.0 | 6633 | 1.7368 |
| 1.6984 | 2.0 | 13266 | 1.6198 |
| 1.6209 | 3.0 | 19899 | 1.5800 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
phueb/BabyBERTa-2
|
phueb
| 2022-01-18T14:44:44Z | 60 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"BabyBERTa",
"en",
"dataset:CHILDES",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- BabyBERTa
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTa
### Overview
BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
## Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, to load the tokenizer for BabyBERTa-1, load it as follows:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
add_prefix_space=True)
```
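A short fill-mask sketch for this checkpoint (BabyBERTa-2), reusing the `add_prefix_space=True` tokenizer setting described above; the example sentence mirrors the widget:
```python
from transformers import RobertaForMaskedLM, RobertaTokenizerFast, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-2", add_prefix_space=True)
model = RobertaForMaskedLM.from_pretrained("phueb/BabyBERTa-2")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("Do you like your <mask> ?"))
```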
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
### Performance
BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish", which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
phueb/BabyBERTa-1
|
phueb
| 2022-01-18T14:44:02Z | 56 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"BabyBERTa",
"en",
"dataset:CHILDES",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- BabyBERTa
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTa
### Overview
BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
## Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, to load the tokenizer for BabyBERTa-1, load it as follows:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
add_prefix_space=True)
```
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
### Performance
BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish", which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
phueb/BabyBERTa-3
|
phueb
| 2022-01-18T14:41:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"BabyBERTa",
"en",
"dataset:CHILDES",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- BabyBERTa
license: mit
datasets:
- CHILDES
widget:
- text: "Look here. What is that <mask> ?"
- text: "Do you like your <mask> ?"
---
## BabyBERTa
### Overview
BabyBERTa is a light-weight version of RoBERTa trained on 5M words of American-English child-directed input.
It is intended for language acquisition research, on a single desktop with a single GPU - no high-performance computing infrastructure needed.
The three provided models are randomly selected from 10 that were trained and reported in the paper.
## Loading the tokenizer
BabyBERTa was trained with `add_prefix_space=True`, so it will not work properly with the tokenizer defaults.
For instance, to load the tokenizer for BabyBERTa-1, load it as follows:
```python
tokenizer = RobertaTokenizerFast.from_pretrained("phueb/BabyBERTa-1",
add_prefix_space=True)
```
### Hyper-Parameters
See the paper for details.
All provided models were trained for 400K steps with a batch size of 16.
Importantly, BabyBERTa never predicts unmasked tokens during training - `unmask_prob` is set to zero.
### Performance
BabyBERTa was developed for learning grammatical knowledge from child-directed input.
Its grammatical knowledge was evaluated using the [Zorro](https://github.com/phueb/Zorro) test suite.
The best model achieves an overall accuracy of 80.3,
comparable to RoBERTa-base, which achieves an overall accuracy of 82.6 on the latest version of Zorro (as of October, 2021).
Both values differ slightly from those reported in the [CoNLL 2021 paper](https://aclanthology.org/2021.conll-1.49/).
There are two reasons for this:
1. Performance of RoBERTa-base is slightly larger because the authors previously lower-cased all words in Zorro before evaluation.
Lower-casing of proper nouns is detrimental to RoBERTa-base because RoBERTa-base has likely been trained on proper nouns that are primarily title-cased.
In contrast, because BabyBERTa is not case-sensitive, its performance is not influenced by this change.
2. The latest version of Zorro no longer contains ambiguous content words such as "Spanish", which can be both a noun and an adjective.
This resulted in a small reduction in the performance of BabyBERTa.
Overall Accuracy on Zorro:
| Model Name | Accuracy (holistic scoring) | Accuracy (MLM-scoring) |
|----------------------------------------|------------------------------|------------|
| [BabyBERTa-1][link-BabyBERTa-1] | 80.3 | 79.9 |
| [BabyBERTa-2][link-BabyBERTa-2] | 78.6 | 78.2 |
| [BabyBERTa-3][link-BabyBERTa-3] | 74.5 | 78.1 |
### Additional Information
This model was trained by [Philip Huebner](https://philhuebner.com), currently at the [UIUC Language and Learning Lab](http://www.learninglanguagelab.org).
More info can be found [here](https://github.com/phueb/BabyBERTa).
[link-BabyBERTa-1]: https://huggingface.co/phueb/BabyBERTa-1
[link-BabyBERTa-2]: https://huggingface.co/phueb/BabyBERTa-2
[link-BabyBERTa-3]: https://huggingface.co/phueb/BabyBERTa-3
|
akozlo/conserv_fulltext_1_18_22
|
akozlo
| 2022-01-18T13:42:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: conserv_fulltext_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conserv_fulltext_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
unbalanced_texts gpt2
|
soskok1288/Sas
|
soskok1288
| 2022-01-18T11:54:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-02T23:29:05Z |
export enum PipelineType {
"text-generation"}
|
huggingtweets/dankogai-hirox246
|
huggingtweets
| 2022-01-18T09:55:05Z | 0 | 0 | null |
[
"huggingtweets",
"en",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/dankogai-hirox246/1642499700234/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190142566831984640/o4kO2hp-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & Dan Kogai</div>
<div style="text-align: center; font-size: 14px;">@dankogai-hirox246</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & Dan Kogai.
| Data | ひろゆき, Hiroyuki Nishimura | Dan Kogai |
| --- | --- | --- |
| Tweets downloaded | 3249 | 3250 |
| Retweets | 284 | 340 |
| Short tweets | 1988 | 2416 |
| Tweets kept | 977 | 494 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vrtv6xf/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dankogai-hirox246's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yfxplpr) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yfxplpr/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dankogai-hirox246')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
hkunlp/T5_large_prefix_all_tasks_2upsample2
|
hkunlp
| 2022-01-18T07:15:22Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
This is the checkpoint of the prefix-tuning model we trained on 21 tasks using an upsampling temperature of 2.
Note: the prefix module is large because we keep the re-parameterization weights instead of compressing them, which keeps the checkpoint closer to its original form and easier for researchers to extend.
|
dmiller1/distilbert-base-uncased-finetuned-emotion
|
dmiller1
| 2022-01-18T03:59:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9261144741040841
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 |
| 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.7.1
- Datasets 1.17.0
- Tokenizers 0.10.3
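As a text-classification fine-tune on the emotion dataset, the model should work with the standard pipeline. A minimal sketch (the input sentence is illustrative):

```python
from transformers import pipeline

# Load the emotion classifier from the Hub.
# Note: labels may appear as LABEL_0..LABEL_5 if id2label was not set during fine-tuning.
classifier = pipeline('text-classification', model='dmiller1/distilbert-base-uncased-finetuned-emotion')
print(classifier('I am so happy you are here!'))
```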
|
ronanki/xlmr_17-01-2022_v3
|
ronanki
| 2022-01-17T20:34:20Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/xlmr_17-01-2022_v3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/xlmr_17-01-2022_v3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/xlmr_17-01-2022_v3')
model = AutoModel.from_pretrained('ronanki/xlmr_17-01-2022_v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/xlmr_17-01-2022_v3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline
|
groadabike
| 2022-01-17T12:53:22Z | 11 | 1 |
asteroid
|
[
"asteroid",
"pytorch",
"audio",
"ConvTasNet",
"audio-to-audio",
"license:cc-by-sa-4.0",
"region:us"
] |
audio-to-audio
| 2022-03-02T23:29:05Z |
---
tags:
- asteroid
- audio
- ConvTasNet
- audio-to-audio
datasets:
- DAMP-VSEP
- Singing/Accompaniment Separation
license: cc-by-sa-4.0
---
## Description:
This model was trained by Gerardo Roa using the dampvsep recipe in Asteroid.
It was trained on the `singing/accompaniment` task of the `DAMP-VSEP` dataset.
## Training config:
```yaml
data:
channels: 1
emb_model: 'no'
metadata_path: metadata
mixture: remix
root_path: /fastdata/acp13gr/DAMP/DAMP-VSEP
sample_rate: 16000
train_set: english_nonenglish
filterbank:
kernel_size: 20
n_filters: 256
stride: 10
main_args:
exp_dir: exp/train_convtasnet_remix-no-0.0-english_nonenglish-0.0005-jade
help: null
masknet:
bn_chan: 256
conv_kernel_size: 3
hid_chan: 512
mask_act: relu
n_blocks: 10
n_repeats: 4
n_src: 2
norm_type: gLN
skip_chan: 256
optim:
lr: 0.0005
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 7
early_stop: true
epochs: 50
half_lr: true
loss_alpha: 0.0
num_workers: 10
```
## Results:
```yaml
"si_sdr": 15.111802516750586,
"si_sdr_imp": 15.178209807687663,
"si_sdr_s0": 12.160261214703553,
"si_sdr_s0_imp": 17.434593619085675,
"si_sdr_s1": 18.063343818797623,
"si_sdr_s1_imp": 12.92182599628965,
"sdr": 15.959722569460281,
"sdr_imp": 14.927002467087567,
"sdr_s0": 13.270412028426595,
"sdr_s0_imp": 16.45867572657551,
"sdr_s1": 18.64903311049397,
"sdr_s1_imp": 13.39532920759962,
"sir": 23.935932341084754,
"sir_imp": 22.903212238712012,
"sir_s0": 22.30777879911744,
"sir_s0_imp": 25.49604249726635,
"sir_s1": 25.56408588305207,
"sir_s1_imp": 20.310381980157665,
"sar": 17.174899162445882,
"sar_imp": -134.47377304178818,
"sar_s0": 14.268071153965913,
"sar_s0_imp": -137.38060105026818,
"sar_s1": 20.081727170925856,
"sar_s1_imp": -131.56694503330817,
"stoi": 0.7746496376326059,
"stoi_imp": 0.19613735629114643,
"stoi_s0": 0.6611376621212413,
"stoi_s0_imp": 0.21162695175464794,
"stoi_s1": 0.8881616131439705,
"stoi_s1_imp": 0.1806477608276449
```
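## Usage:
A minimal sketch, assuming Asteroid's `from_pretrained` Hub integration works for this checkpoint; the random tensor is only a placeholder for a real 16 kHz mono mixture:

```python
import torch
from asteroid.models import ConvTasNet

# Load the separation model from the Hugging Face Hub (assumed to work via Asteroid's Hub integration).
model = ConvTasNet.from_pretrained('groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline')

# Placeholder mixture: 1 second of 16 kHz mono audio. Replace with a real waveform tensor.
mixture = torch.randn(1, 16000)
with torch.no_grad():
    est_sources = model(mixture)  # expected shape: (batch, 2, time) -> singing and accompaniment
```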
## License notice:
This work "ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline"
is a derivative of [DAMP-VSEP corpus](https://zenodo.org/record/3553059) by
[Smule, Inc](https://www.smule.com/),
used under [Restricted License](https://zenodo.org/record/3553059)(Research only).
"ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Gerardo Roa.
|
addy88/t5-grammar-correction
|
addy88
| 2022-01-17T12:09:14Z | 109 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("addy88/t5-grammar-correction")
model = AutoModelForSeq2SeqLM.from_pretrained("addy88/t5-grammar-correction")
input_ids = tokenizer('grammar: This sentences has has bads grammar.', return_tensors='pt').input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
addy88/T5-23-emotions-detections
|
addy88
| 2022-01-17T12:08:03Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("addy88/T5-23-emotions-detections")
tokenizer = T5Tokenizer.from_pretrained("addy88/T5-23-emotions-detections")
text_to_summarize="emotion: i don't like it this is nonsense."
input_ids = tokenizer.encode(text_to_summarize, return_tensors="pt", add_special_tokens=True)
input_ids = input_ids.to(model.device)  # the original snippet used self.device, which is undefined outside a class
generated_ids = model.generate(
input_ids=input_ids,
num_beams=2,
max_length=512,
repetition_penalty=2.5,
length_penalty=1.0,
early_stopping=True,
top_p=0.95,
top_k=50,
num_return_sequences=1,
)
preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]
print(preds[0])
```
|
DoyyingFace/doyying_bert_first_again
|
DoyyingFace
| 2022-01-17T09:00:22Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tmp_qubhe07
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmp_qubhe07
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1374, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/lazar181
|
huggingtweets
| 2022-01-17T01:55:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/lazar181/1642384387963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1451342601483952130/-RJ3Ewqp_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ari/Sera @ 🛌</div>
<div style="text-align: center; font-size: 14px;">@lazar181</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Ari/Sera @ 🛌.
| Data | Ari/Sera @ 🛌 |
| --- | --- |
| Tweets downloaded | 3241 |
| Retweets | 362 |
| Short tweets | 668 |
| Tweets kept | 2211 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/21d2ewj0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lazar181's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ukmb9ye) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ukmb9ye/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lazar181')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
milyiyo/multi-minilm-finetuned-amazon-review
|
milyiyo
| 2022-01-16T22:53:05Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: multi-minilm-finetuned-amazon-review
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.5422
- name: F1
type: f1
value: 0.543454465221178
- name: Precision
type: precision
value: 0.5452336215624385
- name: Recall
type: recall
value: 0.5422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi-minilm-finetuned-amazon-review
This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2436
- Accuracy: 0.5422
- F1: 0.5435
- Precision: 0.5452
- Recall: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.0049 | 1.0 | 2500 | 1.0616 | 0.5352 | 0.5268 | 0.5347 | 0.5352 |
| 0.9172 | 2.0 | 5000 | 1.0763 | 0.5432 | 0.5412 | 0.5444 | 0.5432 |
| 0.8285 | 3.0 | 7500 | 1.1077 | 0.5408 | 0.5428 | 0.5494 | 0.5408 |
| 0.7361 | 4.0 | 10000 | 1.1743 | 0.5342 | 0.5399 | 0.5531 | 0.5342 |
| 0.6538 | 5.0 | 12500 | 1.2436 | 0.5422 | 0.5435 | 0.5452 | 0.5422 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
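Since the model is a review classifier fine-tuned on the Spanish split of amazon_reviews_multi, it should work with the standard text-classification pipeline. A minimal sketch (the review text is illustrative, and the label names depend on how the classes were configured):

```python
from transformers import pipeline

classifier = pipeline('text-classification', model='milyiyo/multi-minilm-finetuned-amazon-review')

# A Spanish review, matching the evaluation split (args: es).
print(classifier('El producto llegó tarde y la calidad es peor de lo esperado.'))
```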
|
husnu/electra-small-turkish-uncased-discriminator
|
husnu
| 2022-01-16T19:01:47Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: ft_electra-small-turkish-uncased-discriminator_lr-2e-1_epochs-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 5.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.2
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.951 | 1.0 | 5818 | 5.9506 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
|
Shushant
| 2022-01-16T15:54:15Z | 55 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.9518 |
| No log | 2.0 | 44 | 3.2703 |
| No log | 3.0 | 66 | 2.9308 |
| No log | 4.0 | 88 | 2.7806 |
| No log | 5.0 | 110 | 2.6926 |
| No log | 6.0 | 132 | 2.7043 |
| No log | 7.0 | 154 | 2.7113 |
| No log | 8.0 | 176 | 2.7236 |
| No log | 9.0 | 198 | 2.7559 |
| No log | 10.0 | 220 | 2.7515 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
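The checkpoint is an extractive question-answering model, so it should work with the standard pipeline. A minimal sketch (the question and context are illustrative placeholders):

```python
from transformers import pipeline

qa = pipeline('question-answering',
              model='Shushant/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ContaminationQAmodel_PubmedBERT')

# Illustrative inputs; replace with real domain text about contamination.
result = qa(question='What is a common source of contamination in cell cultures?',
            context='Mycoplasma contamination is a common problem in cell culture laboratories and is difficult to detect.')
print(result)
```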
|
Shushant/biobert-v1.1-biomedicalQuestionAnswering
|
Shushant
| 2022-01-16T15:34:49Z | 83 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: biobert-v1.1-biomedicalQuestionAnswering
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-v1.1-biomedicalQuestionAnswering
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9009
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 3.7409 |
| No log | 2.0 | 44 | 3.1852 |
| No log | 3.0 | 66 | 3.0342 |
| No log | 4.0 | 88 | 2.9416 |
| No log | 5.0 | 110 | 2.9009 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
huggingtweets/clamtime
|
huggingtweets
| 2022-01-16T07:38:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/clamtime/1642318689772/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1471629178936176645/RPufrtAg_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">clementine!!!!</div>
<div style="text-align: center; font-size: 14px;">@clamtime</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from clementine!!!!.
| Data | clementine!!!! |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 352 |
| Short tweets | 892 |
| Tweets kept | 1999 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/be98fl09/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @clamtime's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24efu0w5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24efu0w5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/clamtime')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
porpaul/t5-small-finetuned-xsum
|
porpaul
| 2022-01-16T06:59:38Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
args: chinese_traditional
metrics:
- name: Rouge1
type: rouge
value: 0.5217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2188
- Rouge1: 0.5217
- Rouge2: 0.0464
- Rougel: 0.527
- Rougelsum: 0.5215
- Gen Len: 6.7441
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.3831 | 1.0 | 7475 | 1.2188 | 0.5217 | 0.0464 | 0.527 | 0.5215 | 6.7441 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
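As a T5 summarization fine-tune, the model should work with the summarization pipeline. A minimal sketch (the input is a placeholder, and depending on how fine-tuning was run a `summarize: ` source prefix may or may not be required):

```python
from transformers import pipeline

summarizer = pipeline('summarization', model='porpaul/t5-small-finetuned-xsum')

# Placeholder input; replace with a real article in the fine-tuning language (traditional Chinese for this card).
article = 'Replace this placeholder with a real article.'
print(summarizer(article, max_length=32, min_length=5))
```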
|
Sakil/imdbsentdistilbertmodel
|
Sakil
| 2022-01-16T06:54:14Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text Classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
language:
- en
tags:
- text Classification
license: apache-2.0
widget:
- text: "I like you. </s></s> I love you."
---
* IMDBSentimentDistilBertModel:
- I have used IMDB movie review dataset to create custom model by using DistilBertForSequenceClassification.
from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
model = DistilBertForSequenceClassification.from_pretrained('./imdbsentdistilbertmodel')
|
anzorq/t5-v1_1-small-ru_kbd-cased
|
anzorq
| 2022-01-16T05:24:51Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"translation",
"ru",
"kbd",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
language:
- ru
- kbd
tags:
- translation
datasets:
- anzorq/kbd-ru-1.67M-temp
- 17753 Russian-Kabardian pairs of text
widget:
- text: "ru->kbd: Я иду домой."
example_title: "Я иду домой."
- text: "ru->kbd: Дети играют во дворе."
example_title: "Дети играют во дворе."
- text: "ru->kbd: Сколько тебе лет?"
example_title: "Сколько тебе лет?"
---
## [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) model
### pretrained on [anzorq/kbd-ru-1.67M-temp](https://huggingface.co/datasets/anzorq/kbd-ru-1.67M-temp)
### fine-tuned on **17753** Russian-Kabardian word/sentence pairs
KBD text uses a custom Latin script for optimization reasons.
Translation input should start with '**ru->kbd:** '.
**Tokenizer**: T5 sentencepiece, char, cased.
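A minimal usage sketch, assuming the checkpoint loads with the standard seq2seq classes (generation settings left at their defaults):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('anzorq/t5-v1_1-small-ru_kbd-cased')
model = AutoModelForSeq2SeqLM.from_pretrained('anzorq/t5-v1_1-small-ru_kbd-cased')

# Translation input must start with the 'ru->kbd: ' prefix, as described above.
input_ids = tokenizer('ru->kbd: Я иду домой.', return_tensors='pt').input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```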
|
haji2438/bertweet-base-SNS_BRANDS_100k
|
haji2438
| 2022-01-16T02:23:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bertweet-base-SNS_BRANDS_100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertweet-base-SNS_BRANDS_100k
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0483
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0735 | 1.0 | 2928 | 0.0670 |
| 0.0574 | 2.0 | 5856 | 0.0529 |
| 0.0497 | 3.0 | 8784 | 0.0483 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
|
husnu
| 2022-01-15T18:42:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- turkish squad v2
model-index:
- name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
This model is a fine-tuned version of [husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3](https://huggingface.co/husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3) on the Turkish SQuAD v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6404 | 1.0 | 2245 | 1.4524 |
| 0.403 | 2.0 | 4490 | 1.5638 |
| 0.2355 | 3.0 | 6735 | 1.9011 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
Fraser/to_delete
|
Fraser
| 2022-01-15T15:08:51Z | 0 | 0 | null |
[
"program-synthesis",
"en",
"dataset:program-synthesis",
"license:mit",
"region:us"
] | null | 2022-03-02T23:29:04Z |
---
language:
- en
thumbnail: "https://huggingface.co/Fraser/program-synthesis/resolve/main/img.png"
tags:
- program-synthesis
license: "mit"
datasets:
- program-synthesis
---
# Program Synthesis Data
Generated program synthesis datasets used to train [dreamcoder](https://github.com/ellisk42/ec).
Currently just supports text & list data.
```python
_FEATURES = datasets.Features(
{
"description": datasets.Value("string"),
"input": datasets.Value("string"),
"output": datasets.Value("string"),
"types": datasets.Value("string")
}
)
```

|
Ifromspace/GRIEFSOFT-walr
|
Ifromspace
| 2022-01-15T13:07:07Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"4ulan",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- ru
- 4ulan
---
Something fun for the Discord server)) https://discord.gg/HpeadKH
Offers:
work@4ulan.fun
|
Ifromspace/GRIEFSOFT
|
Ifromspace
| 2022-01-15T13:06:43Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"4ulan",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language:
- ru
tags:
- PyTorch
- Transformers
- 4ulan
---
**Fork of https://huggingface.co/sberbank-ai/rugpt3large_based_on_gpt2**
Something fun for the Discord server))
ROADMAP:
- Collect a small dataset from Russian "popadantsy" (portal-fantasy) novels. <------------------------- Currently here.
- Fine-tune the model.
- Release it to the Discord server.
https://discord.gg/HpeadKH
|
khizon/distilbert-unreliable-news-eng-4L
|
khizon
| 2022-01-15T07:06:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# Unreliable News Classifier (English)
Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset was split such that there is no overlap of news sources between the three sets.
This model used the pre-trained weights of `distilbert-base-cased` as a starting point (only 4 layers) and achieves 84% accuracy on the test set. Its performance is within 1% of the BERT-based model while running **2.0x** faster.
For more details: [Github](https://github.com/khizon/CS284_final_project)
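A minimal usage sketch with the text-classification pipeline (the headline is illustrative, and the label names depend on how the classifier head was configured):

```python
from transformers import pipeline

classifier = pipeline('text-classification', model='khizon/distilbert-unreliable-news-eng-4L')
print(classifier('Scientists discover a miracle cure that doctors do not want you to know about.'))
```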
|
husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3
|
husnu
| 2022-01-15T05:24:03Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-3
This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the Turkish SQuAD dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1669
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.5305 | 1.0 | 5818 | 2.4024 |
| 2.3264 | 2.0 | 11636 | 2.2298 |
| 2.1762 | 3.0 | 17454 | 2.1669 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|