Dataset columns:
- modelId: string (5–139 chars)
- author: string (2–42 chars)
- last_modified: timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-09-02 18:52:31)
- downloads: int64 (0 to 223M)
- likes: int64 (0 to 11.7k)
- library_name: string (533 classes)
- tags: list (1 to 4.05k items)
- pipeline_tag: string (55 classes)
- createdAt: timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-09-02 18:52:05)
- card: string (11 chars to 1.01M chars)
mesolitica/t5-super-tiny-bahasa-cased
|
mesolitica
| 2022-10-06T15:38:49Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-06T15:37:13Z |
---
language: ms
---
# t5-super-tiny-bahasa-cased
Pretrained T5 super-tiny language model for both standard and local Malay.
## Pretraining Corpus
The `t5-super-tiny-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
11. Social media normalization.
12. Noisy EN-MS translation.
13. Noisy MS-EN translation.
The preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository (https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
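As a quick sketch (not part of the original card), these prefixes can be fed to `T5ForConditionalGeneration` in the usual way; the repository id below is taken from this card's header, and the generated output will of course depend on the checkpoint:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('mesolitica/t5-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('mesolitica/t5-super-tiny-bahasa-cased')

# QA-style prompt using the `soalan:` prefix
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors='pt')
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```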
|
mesolitica/t5-tiny-bahasa-cased
|
mesolitica
| 2022-10-06T15:35:23Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-06T15:29:48Z |
---
language: ms
---
# t5-tiny-bahasa-cased
Pretrained T5 tiny language model for both standard and local Malay.
## Pretraining Corpus
The `t5-tiny-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
11. Social media normalization.
12. Noisy EN-MS translation.
13. Noisy MS-EN translation.
The preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository (https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
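As with the other checkpoints in this family, the prefixes above can be used directly with `T5ForConditionalGeneration`. The sketch below (not part of the original card) uses the `ringkasan:` prefix with the repository id from this card's header; the input text is a made-up example:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('mesolitica/t5-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('mesolitica/t5-tiny-bahasa-cased')

# Abstractive summarization via the `ringkasan:` prefix (placeholder input text)
text = 'ringkasan: Perdana Menteri berkata kerajaan akan terus membantu rakyat yang terjejas akibat banjir ...'
input_ids = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```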
|
ronanki/MiniLM-L12-v2-alias
|
ronanki
| 2022-10-06T15:32:00Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-06T15:31:51Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/MiniLM-L12-v2-alias
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/MiniLM-L12-v2-alias')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/MiniLM-L12-v2-alias')
model = AutoModel.from_pretrained('ronanki/MiniLM-L12-v2-alias')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
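As an illustrative follow-up (not from the original card), the resulting embeddings can be compared with cosine similarity, for example via `util.cos_sim` from sentence-transformers:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('ronanki/MiniLM-L12-v2-alias')
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences, convert_to_tensor=True)

# 2x2 matrix of pairwise cosine similarities
print(util.cos_sim(embeddings, embeddings))
```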
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/MiniLM-L12-v2-alias)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 560 with parameters:
```
{'batch_size': 32}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 560,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
mesolitica/t5-super-super-tiny-standard-bahasa-cased
|
mesolitica
| 2022-10-06T15:25:19Z | 126 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"ms",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: ms
---
# t5-super-super-tiny-standard-bahasa-cased
Pretrained T5 super-super-tiny standard language model for Malay.
## Pretraining Corpus
The `t5-super-super-tiny-standard-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
The preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository (https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
The output is:
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
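For completeness, here is a similar sketch (not part of the original card) for the translation prefix, reusing the checkpoint name from the examples above; the translation quality is not guaranteed:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-super-super-tiny-bahasa-cased')

# EN-MS translation via the `terjemah Inggeris ke Melayu:` prefix
input_ids = tokenizer.encode('terjemah Inggeris ke Melayu: I love reading books.', return_tensors='pt')
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```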
|
mesolitica/t5-super-tiny-standard-bahasa-cased
|
mesolitica
| 2022-10-06T15:25:03Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"ms",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: ms
---
# t5-super-tiny-standard-bahasa-cased
Pretrained T5 super-tiny standard language model for Malay.
## Pretraining Corpus
The `t5-super-tiny-standard-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
The preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository (https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-super-tiny-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-tiny-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-super-tiny-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-super-tiny-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
The output is:
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
|
mesolitica/t5-small-standard-bahasa-cased
|
mesolitica
| 2022-10-06T15:24:30Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"feature-extraction",
"ms",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: ms
---
# t5-small-standard-bahasa-cased
Pretrained T5 small standard language model for Malay.
## Pretraining Corpus
The `t5-small-standard-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
2. News title prediction on bahasa news.
3. Next sentence prediction on bahasa news, bahasa Wikipedia, bahasa Academia.edu, bahasa parliament and translated The Pile.
4. Translated QA Natural.
5. Text Similarity task on translated SNLI and translated MNLI.
6. EN-MS translation.
7. MS-EN translation.
8. Abstractive Summarization.
9. Knowledge Graph triples generation.
10. Paraphrase.
The preparation steps can be reproduced at https://github.com/huseinzol05/malaya/tree/master/pretrained-model/t5/prepare
## Pretraining details
- This model was trained using the Google T5 repository (https://github.com/google-research/text-to-text-transfer-transformer) on a v3-8 TPU.
- All steps can be reproduced from https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/t5
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` and the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import T5Tokenizer, T5Model
model = T5Model.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
```
## Example using T5ForConditionalGeneration
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
model = T5ForConditionalGeneration.from_pretrained('malay-huggingface/t5-small-bahasa-cased')
input_ids = tokenizer.encode('soalan: siapakah perdana menteri malaysia?', return_tensors = 'pt')
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
The output is:
```
'Mahathir Mohamad'
```
## Supported prefix
1. `soalan: {string}`, trained using Natural QA.
2. `ringkasan: {string}`, for abstractive summarization.
3. `tajuk: {string}`, for abstractive title.
4. `parafrasa: {string}`, for abstractive paraphrase.
5. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation.
6. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation.
7. `grafik pengetahuan: {string}`, for MS text to EN Knowledge Graph triples format.
8. `ayat1: {string1} ayat2: {string2}`, semantic similarity.
|
Charul/my-dummy-model-1
|
Charul
| 2022-10-06T14:56:49Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-06T14:41:28Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 150 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 150,
"warmup_steps": 15,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Sandipan1994/t5-small-mathT5-finetune_qatoexp
|
Sandipan1994
| 2022-10-06T14:03:21Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:math_qa",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-06T11:23:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- math_qa
metrics:
- rouge
model-index:
- name: t5-small-mathT5-finetune_qatoexp
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: math_qa
type: math_qa
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 21.9174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-mathT5-finetune_qatoexp
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the math_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8677
- Rouge1: 21.9174
- Rouge2: 8.4401
- Rougel: 19.1645
- Rougelsum: 19.8239
- Gen Len: 18.9765
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
We trained T5-small on the MathQA dataset for sequence-to-sequence generation of explanations from a given math problem.
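A minimal inference sketch (not part of the original card) is shown below; whether the checkpoint expects any task prefix is not documented, so the question is passed as-is and the example problem is made up:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('Sandipan1994/t5-small-mathT5-finetune_qatoexp')
model = AutoModelForSeq2SeqLM.from_pretrained('Sandipan1994/t5-small-mathT5-finetune_qatoexp')

# Made-up math word problem; the model should generate an explanation
question = "A train travels 60 km in 1.5 hours. What is its average speed?"
input_ids = tokenizer(question, return_tensors='pt').input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```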
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.4496 | 1.0 | 2984 | 2.2096 | 19.6477 | 6.508 | 16.9295 | 17.5212 | 18.9064 |
| 2.2893 | 2.0 | 5968 | 2.0837 | 20.4879 | 7.2528 | 17.7778 | 18.4085 | 18.968 |
| 2.1869 | 3.0 | 8952 | 2.0125 | 20.8462 | 7.6105 | 18.1516 | 18.8343 | 18.9837 |
| 2.1456 | 4.0 | 11936 | 1.9633 | 20.7623 | 7.7113 | 18.1274 | 18.783 | 18.9886 |
| 2.1171 | 5.0 | 14920 | 1.9321 | 21.0648 | 7.8897 | 18.4162 | 19.0551 | 18.9844 |
| 2.0854 | 6.0 | 17904 | 1.9061 | 21.4445 | 8.0883 | 18.8038 | 19.4176 | 18.9812 |
| 2.0592 | 7.0 | 20888 | 1.8902 | 21.5714 | 8.2751 | 18.8864 | 19.537 | 18.9772 |
| 2.0609 | 8.0 | 23872 | 1.8770 | 21.7737 | 8.3297 | 19.022 | 19.6897 | 18.9763 |
| 2.0285 | 9.0 | 26856 | 1.8701 | 21.964 | 8.4358 | 19.1701 | 19.845 | 18.9747 |
| 2.0165 | 10.0 | 29840 | 1.8677 | 21.9174 | 8.4401 | 19.1645 | 19.8239 | 18.9765 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
GItaf/bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch2
|
GItaf
| 2022-10-06T13:45:07Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-06T13:44:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jasmine009/materials
|
jasmine009
| 2022-10-06T12:10:51Z | 240 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-06T12:10:37Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: materials
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8928571343421936
---
# materials
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
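A minimal sketch (not part of the original card) for trying the classifier with the `image-classification` pipeline; the image path is a hypothetical placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jasmine009/materials")
# Replace with a real local path or URL to an image of brick, metal, paper, plastic or wood
print(classifier("path/to/some_image.jpg"))
```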
## Example Images
#### brick

#### metal

#### paper

#### plastic

#### wood

|
fmartinmonier/distilbert-base-uncased-finetuned-cola
|
fmartinmonier
| 2022-10-06T10:52:34Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-06T10:08:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5477951635989807
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8133
- Matthews Correlation: 0.5478
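As an illustrative sketch (not part of the original card), the checkpoint can be tried with the `text-classification` pipeline; note that the label names (e.g. `LABEL_0`/`LABEL_1` for unacceptable/acceptable) come from the checkpoint config and are an assumption here:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fmartinmonier/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was written by the author."))
```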
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5259 | 1.0 | 535 | 0.5401 | 0.4009 |
| 0.3513 | 2.0 | 1070 | 0.5403 | 0.4876 |
| 0.2373 | 3.0 | 1605 | 0.5422 | 0.5384 |
| 0.1795 | 4.0 | 2140 | 0.7586 | 0.5309 |
| 0.1282 | 5.0 | 2675 | 0.8133 | 0.5478 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
hitchyouwithdawork/koelectra-small-v3-discriminator-finetuned
|
hitchyouwithdawork
| 2022-10-06T09:45:01Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-06T00:03:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: koelectra-small-v3-discriminator-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# koelectra-small-v3-discriminator-finetuned
This model is a fine-tuned version of [monologg/koelectra-small-v3-discriminator](https://huggingface.co/monologg/koelectra-small-v3-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4324
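A minimal usage sketch (not part of the original card) with the `question-answering` pipeline; since the fine-tuning data is not documented, the Korean example below is only illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="hitchyouwithdawork/koelectra-small-v3-discriminator-finetuned")
# "What is the capital of South Korea?" / "The capital of South Korea is Seoul."
print(qa(question="대한민국의 수도는 어디인가?", context="대한민국의 수도는 서울이다."))
```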
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.4466 | 1.0 | 702 | 3.0850 |
| 3.1639 | 2.0 | 1404 | 2.5210 |
| 2.3734 | 3.0 | 2106 | 2.1738 |
| 2.0019 | 4.0 | 2808 | 1.7048 |
| 1.5426 | 5.0 | 3510 | 1.5741 |
| 1.4233 | 6.0 | 4212 | 1.5246 |
| 1.3254 | 7.0 | 4914 | 1.4860 |
| 1.2166 | 8.0 | 5616 | 1.4525 |
| 1.1515 | 9.0 | 6318 | 1.4354 |
| 1.0863 | 10.0 | 7020 | 1.4480 |
| 1.0471 | 11.0 | 7722 | 1.4549 |
| 0.9938 | 12.0 | 8424 | 1.4586 |
| 0.9645 | 13.0 | 9126 | 1.4276 |
| 0.957 | 14.0 | 9828 | 1.4289 |
| 0.9322 | 15.0 | 10530 | 1.4324 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
guma/distilgpt2-finetuned-shakespeare
|
guma
| 2022-10-06T09:40:23Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-06T09:18:15Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: guma/distilgpt2-finetuned-shakespeare
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# guma/distilgpt2-finetuned-shakespeare
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1769
- Validation Loss: 3.5116
- Epoch: 19
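A minimal generation sketch (not part of the original card), using the TensorFlow classes since this is a TF checkpoint; the prompt and sampling settings are arbitrary:
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("guma/distilgpt2-finetuned-shakespeare")
model = TFAutoModelForCausalLM.from_pretrained("guma/distilgpt2-finetuned-shakespeare")

inputs = tokenizer("Shall I compare thee", return_tensors="tf")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```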
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2136 | 3.8303 | 0 |
| 3.8997 | 3.6993 | 1 |
| 3.7790 | 3.6344 | 2 |
| 3.7061 | 3.5923 | 3 |
| 3.6476 | 3.5653 | 4 |
| 3.6003 | 3.5513 | 5 |
| 3.5578 | 3.5360 | 6 |
| 3.5204 | 3.5277 | 7 |
| 3.4843 | 3.5171 | 8 |
| 3.4514 | 3.5117 | 9 |
| 3.4194 | 3.5048 | 10 |
| 3.3903 | 3.5040 | 11 |
| 3.3627 | 3.5006 | 12 |
| 3.3332 | 3.5006 | 13 |
| 3.3052 | 3.5019 | 14 |
| 3.2772 | 3.5051 | 15 |
| 3.2514 | 3.5043 | 16 |
| 3.2249 | 3.5026 | 17 |
| 3.2019 | 3.5129 | 18 |
| 3.1769 | 3.5116 | 19 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.2
- Tokenizers 0.12.1
|
jplu/tf-xlm-r-ner-40-lang
|
jplu
| 2022-10-06T09:25:04Z | 639 | 25 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"token-classification",
"multilingual",
"af",
"ar",
"bg",
"bn",
"de",
"el",
"en",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"he",
"hi",
"hu",
"id",
"it",
"ja",
"jv",
"ka",
"kk",
"ko",
"ml",
"mr",
"ms",
"my",
"nl",
"pt",
"ru",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ur",
"vi",
"yo",
"zh",
"arxiv:1911.02116",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- multilingual
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
language_bcp47:
- fa-IR
---
# XLM-R + NER
This model is [XLM-Roberta-base](https://arxiv.org/abs/1911.02116) fine-tuned on the 40 languages proposed in [XTREME](https://github.com/google-research/xtreme), using data from [Wikiann](https://aclweb.org/anthology/P17-1178). This is still an ongoing work and the results will be updated every time an improvement is reached.
The covered labels are:
```
LOC
ORG
PER
O
```
## Metrics on evaluation set:
### Average over the 40 languages
Number of documents: 262300
```
precision recall f1-score support
ORG 0.81 0.81 0.81 102452
PER 0.90 0.91 0.91 108978
LOC 0.86 0.89 0.87 121868
micro avg 0.86 0.87 0.87 333298
macro avg 0.86 0.87 0.87 333298
```
### Afrikaans
Number of documents: 1000
```
precision recall f1-score support
ORG 0.89 0.88 0.88 582
PER 0.89 0.97 0.93 369
LOC 0.84 0.90 0.86 518
micro avg 0.87 0.91 0.89 1469
macro avg 0.87 0.91 0.89 1469
```
### Arabic
Number of documents: 10000
```
precision recall f1-score support
ORG 0.83 0.84 0.84 3507
PER 0.90 0.91 0.91 3643
LOC 0.88 0.89 0.88 3604
micro avg 0.87 0.88 0.88 10754
macro avg 0.87 0.88 0.88 10754
```
### Basque
Number of documents: 10000
```
precision recall f1-score support
LOC 0.88 0.93 0.91 5228
ORG 0.86 0.81 0.83 3654
PER 0.91 0.91 0.91 4072
micro avg 0.89 0.89 0.89 12954
macro avg 0.89 0.89 0.89 12954
```
### Bengali
Number of documents: 1000
```
precision recall f1-score support
ORG 0.86 0.89 0.87 325
LOC 0.91 0.91 0.91 406
PER 0.96 0.95 0.95 364
micro avg 0.91 0.92 0.91 1095
macro avg 0.91 0.92 0.91 1095
```
### Bulgarian
Number of documents: 1000
```
precision recall f1-score support
ORG 0.86 0.83 0.84 3661
PER 0.92 0.95 0.94 4006
LOC 0.92 0.95 0.94 6449
micro avg 0.91 0.92 0.91 14116
macro avg 0.91 0.92 0.91 14116
```
### Burmese
Number of documents: 100
```
precision recall f1-score support
LOC 0.60 0.86 0.71 37
ORG 0.68 0.63 0.66 30
PER 0.44 0.44 0.44 36
micro avg 0.57 0.65 0.61 103
macro avg 0.57 0.65 0.60 103
```
### Chinese
Number of documents: 10000
```
precision recall f1-score support
ORG 0.70 0.69 0.70 4022
LOC 0.76 0.81 0.78 3830
PER 0.84 0.84 0.84 3706
micro avg 0.76 0.78 0.77 11558
macro avg 0.76 0.78 0.77 11558
```
### Dutch
Number of documents: 10000
```
precision recall f1-score support
ORG 0.87 0.87 0.87 3930
PER 0.95 0.95 0.95 4377
LOC 0.91 0.92 0.91 4813
micro avg 0.91 0.92 0.91 13120
macro avg 0.91 0.92 0.91 13120
```
### English
Number of documents: 10000
```
precision recall f1-score support
LOC 0.83 0.84 0.84 4781
PER 0.89 0.90 0.89 4559
ORG 0.75 0.75 0.75 4633
micro avg 0.82 0.83 0.83 13973
macro avg 0.82 0.83 0.83 13973
```
### Estonian
Number of documents: 10000
```
precision recall f1-score support
LOC 0.89 0.92 0.91 5654
ORG 0.85 0.85 0.85 3878
PER 0.94 0.94 0.94 4026
micro avg 0.90 0.91 0.90 13558
macro avg 0.90 0.91 0.90 13558
```
### Finnish
Number of documents: 10000
```
precision recall f1-score support
ORG 0.84 0.83 0.84 4104
LOC 0.88 0.90 0.89 5307
PER 0.95 0.94 0.94 4519
micro avg 0.89 0.89 0.89 13930
macro avg 0.89 0.89 0.89 13930
```
### French
Number of documents: 10000
```
precision recall f1-score support
LOC 0.90 0.89 0.89 4808
ORG 0.84 0.87 0.85 3876
PER 0.94 0.93 0.94 4249
micro avg 0.89 0.90 0.90 12933
macro avg 0.89 0.90 0.90 12933
```
### Georgian
Number of documents: 10000
```
precision recall f1-score support
PER 0.90 0.91 0.90 3964
ORG 0.83 0.77 0.80 3757
LOC 0.82 0.88 0.85 4894
micro avg 0.84 0.86 0.85 12615
macro avg 0.84 0.86 0.85 12615
```
### German
Number of documents: 10000
```
precision recall f1-score support
LOC 0.85 0.90 0.87 4939
PER 0.94 0.91 0.92 4452
ORG 0.79 0.78 0.79 4247
micro avg 0.86 0.86 0.86 13638
macro avg 0.86 0.86 0.86 13638
```
### Greek
Number of documents: 10000
```
precision recall f1-score support
ORG 0.86 0.85 0.85 3771
LOC 0.88 0.91 0.90 4436
PER 0.91 0.93 0.92 3894
micro avg 0.88 0.90 0.89 12101
macro avg 0.88 0.90 0.89 12101
```
### Hebrew
Number of documents: 10000
```
precision recall f1-score support
PER 0.87 0.88 0.87 4206
ORG 0.76 0.75 0.76 4190
LOC 0.85 0.85 0.85 4538
micro avg 0.83 0.83 0.83 12934
macro avg 0.82 0.83 0.83 12934
```
### Hindi
Number of documents: 1000
```
precision recall f1-score support
ORG 0.78 0.81 0.79 362
LOC 0.83 0.85 0.84 422
PER 0.90 0.95 0.92 427
micro avg 0.84 0.87 0.85 1211
macro avg 0.84 0.87 0.85 1211
```
### Hungarian
Number of documents: 10000
```
precision recall f1-score support
PER 0.95 0.95 0.95 4347
ORG 0.87 0.88 0.87 3988
LOC 0.90 0.92 0.91 5544
micro avg 0.91 0.92 0.91 13879
macro avg 0.91 0.92 0.91 13879
```
### Indonesian
Number of documents: 10000
```
precision recall f1-score support
ORG 0.88 0.89 0.88 3735
LOC 0.93 0.95 0.94 3694
PER 0.93 0.93 0.93 3947
micro avg 0.91 0.92 0.92 11376
macro avg 0.91 0.92 0.92 11376
```
### Italian
Number of documents: 10000
```
precision recall f1-score support
LOC 0.88 0.88 0.88 4592
ORG 0.86 0.86 0.86 4088
PER 0.96 0.96 0.96 4732
micro avg 0.90 0.90 0.90 13412
macro avg 0.90 0.90 0.90 13412
```
### Japanese
Number of documents: 10000
```
precision recall f1-score support
ORG 0.62 0.61 0.62 4184
PER 0.76 0.81 0.78 3812
LOC 0.68 0.74 0.71 4281
micro avg 0.69 0.72 0.70 12277
macro avg 0.69 0.72 0.70 12277
```
### Javanese
Number of documents: 100
```
precision recall f1-score support
ORG 0.79 0.80 0.80 46
PER 0.81 0.96 0.88 26
LOC 0.75 0.75 0.75 40
micro avg 0.78 0.82 0.80 112
macro avg 0.78 0.82 0.80 112
```
### Kazakh
Number of documents: 1000
```
precision recall f1-score support
ORG 0.76 0.61 0.68 307
LOC 0.78 0.90 0.84 461
PER 0.87 0.91 0.89 367
micro avg 0.81 0.83 0.82 1135
macro avg 0.81 0.83 0.81 1135
```
### Korean
Number of documents: 10000
```
precision recall f1-score support
LOC 0.86 0.89 0.88 5097
ORG 0.79 0.74 0.77 4218
PER 0.83 0.86 0.84 4014
micro avg 0.83 0.83 0.83 13329
macro avg 0.83 0.83 0.83 13329
```
### Malay
Number of documents: 1000
```
precision recall f1-score support
ORG 0.87 0.89 0.88 368
PER 0.92 0.91 0.91 366
LOC 0.94 0.95 0.95 354
micro avg 0.91 0.92 0.91 1088
macro avg 0.91 0.92 0.91 1088
```
### Malayalam
Number of documents: 1000
```
precision recall f1-score support
ORG 0.75 0.74 0.75 347
PER 0.84 0.89 0.86 417
LOC 0.74 0.75 0.75 391
micro avg 0.78 0.80 0.79 1155
macro avg 0.78 0.80 0.79 1155
```
### Marathi
Number of documents: 1000
```
precision recall f1-score support
PER 0.89 0.94 0.92 394
LOC 0.82 0.84 0.83 457
ORG 0.84 0.78 0.81 339
micro avg 0.85 0.86 0.85 1190
macro avg 0.85 0.86 0.85 1190
```
### Persian
Number of documents: 10000
```
precision recall f1-score support
PER 0.93 0.92 0.93 3540
LOC 0.93 0.93 0.93 3584
ORG 0.89 0.92 0.90 3370
micro avg 0.92 0.92 0.92 10494
macro avg 0.92 0.92 0.92 10494
```
### Portuguese
Number of documents: 10000
```
precision recall f1-score support
LOC 0.90 0.91 0.91 4819
PER 0.94 0.92 0.93 4184
ORG 0.84 0.88 0.86 3670
micro avg 0.89 0.91 0.90 12673
macro avg 0.90 0.91 0.90 12673
```
### Russian
Number of documents: 10000
```
precision recall f1-score support
PER 0.93 0.96 0.95 3574
LOC 0.87 0.89 0.88 4619
ORG 0.82 0.80 0.81 3858
micro avg 0.87 0.88 0.88 12051
macro avg 0.87 0.88 0.88 12051
```
### Spanish
Number of documents: 10000
```
precision recall f1-score support
PER 0.95 0.93 0.94 3891
ORG 0.86 0.88 0.87 3709
LOC 0.89 0.91 0.90 4553
micro avg 0.90 0.91 0.90 12153
macro avg 0.90 0.91 0.90 12153
```
### Swahili
Number of documents: 1000
```
precision recall f1-score support
ORG 0.82 0.85 0.83 349
PER 0.95 0.92 0.94 403
LOC 0.86 0.89 0.88 450
micro avg 0.88 0.89 0.88 1202
macro avg 0.88 0.89 0.88 1202
```
### Tagalog
Number of documents: 1000
```
precision recall f1-score support
LOC 0.90 0.91 0.90 338
ORG 0.83 0.91 0.87 339
PER 0.96 0.93 0.95 350
micro avg 0.90 0.92 0.91 1027
macro avg 0.90 0.92 0.91 1027
```
### Tamil
Number of documents: 1000
```
precision recall f1-score support
PER 0.90 0.92 0.91 392
ORG 0.77 0.76 0.76 370
LOC 0.78 0.81 0.79 421
micro avg 0.82 0.83 0.82 1183
macro avg 0.82 0.83 0.82 1183
```
### Telugu
Number of documents: 1000
```
precision recall f1-score support
ORG 0.67 0.55 0.61 347
LOC 0.78 0.87 0.82 453
PER 0.73 0.86 0.79 393
micro avg 0.74 0.77 0.76 1193
macro avg 0.73 0.77 0.75 1193
```
### Thai
Number of documents: 10000
```
precision recall f1-score support
LOC 0.63 0.76 0.69 3928
PER 0.78 0.83 0.80 6537
ORG 0.59 0.59 0.59 4257
micro avg 0.68 0.74 0.71 14722
macro avg 0.68 0.74 0.71 14722
```
### Turkish
Number of documents: 10000
```
precision recall f1-score support
PER 0.94 0.94 0.94 4337
ORG 0.88 0.89 0.88 4094
LOC 0.90 0.92 0.91 4929
micro avg 0.90 0.92 0.91 13360
macro avg 0.91 0.92 0.91 13360
```
### Urdu
Number of documents: 1000
```
precision recall f1-score support
LOC 0.90 0.95 0.93 352
PER 0.96 0.96 0.96 333
ORG 0.91 0.90 0.90 326
micro avg 0.92 0.94 0.93 1011
macro avg 0.92 0.94 0.93 1011
```
### Vietnamese
Number of documents: 10000
```
precision recall f1-score support
ORG 0.86 0.87 0.86 3579
LOC 0.88 0.91 0.90 3811
PER 0.92 0.93 0.93 3717
micro avg 0.89 0.90 0.90 11107
macro avg 0.89 0.90 0.90 11107
```
### Yoruba
Number of documents: 100
```
precision recall f1-score support
LOC 0.54 0.72 0.62 36
ORG 0.58 0.31 0.41 35
PER 0.77 1.00 0.87 36
micro avg 0.64 0.68 0.66 107
macro avg 0.63 0.68 0.63 107
```
## Reproduce the results
Download and prepare the dataset from the [XTREME repo](https://github.com/google-research/xtreme#download-the-data). Next, from the root of the transformers repo run:
```
cd examples/ner
python run_tf_ner.py \
--data_dir . \
--labels ./labels.txt \
--model_name_or_path jplu/tf-xlm-roberta-base \
--output_dir model \
--max_seq_length 128 \
--num_train_epochs 2 \
--per_gpu_train_batch_size 16 \
--per_gpu_eval_batch_size 32 \
--do_train \
--do_eval \
--logging_dir logs \
--mode token-classification \
--evaluate_during_training \
--optimizer_name adamw
```
## Usage with pipelines
```python
from transformers import pipeline
nlp_ner = pipeline(
"ner",
model="jplu/tf-xlm-r-ner-40-lang",
tokenizer=(
'jplu/tf-xlm-r-ner-40-lang',
{"use_fast": True}),
framework="tf"
)
text_fr = "Barack Obama est né à Hawaï."
text_en = "Barack Obama was born in Hawaii."
text_es = "Barack Obama nació en Hawai."
text_zh = "巴拉克·奧巴馬(Barack Obama)出生於夏威夷。"
text_ar = "ولد باراك أوباما في هاواي."
nlp_ner(text_fr)
#Output: [{'word': '▁Barack', 'score': 0.9894659519195557, 'entity': 'PER'}, {'word': '▁Obama', 'score': 0.9888848662376404, 'entity': 'PER'}, {'word': '▁Hawa', 'score': 0.998701810836792, 'entity': 'LOC'}, {'word': 'ï', 'score': 0.9987035989761353, 'entity': 'LOC'}]
nlp_ner(text_en)
#Output: [{'word': '▁Barack', 'score': 0.9929141998291016, 'entity': 'PER'}, {'word': '▁Obama', 'score': 0.9930834174156189, 'entity': 'PER'}, {'word': '▁Hawaii', 'score': 0.9986202120780945, 'entity': 'LOC'}]
nlp_ner(text_es)
#Output: [{'word': '▁Barack', 'score': 0.9944776296615601, 'entity': 'PER'}, {'word': '▁Obama', 'score': 0.9949177503585815, 'entity': 'PER'}, {'word': '▁Hawa', 'score': 0.9987911581993103, 'entity': 'LOC'}, {'word': 'i', 'score': 0.9984861612319946, 'entity': 'LOC'}]
nlp_ner(text_zh)
#Output: [{'word': '夏威夷', 'score': 0.9988449215888977, 'entity': 'LOC'}]
nlp_ner(text_ar)
#Output: [{'word': '▁با', 'score': 0.9903655648231506, 'entity': 'PER'}, {'word': 'راك', 'score': 0.9850614666938782, 'entity': 'PER'}, {'word': '▁أوباما', 'score': 0.9850308299064636, 'entity': 'PER'}, {'word': '▁ها', 'score': 0.9477543234825134, 'entity': 'LOC'}, {'word': 'وا', 'score': 0.9428229928016663, 'entity': 'LOC'}, {'word': 'ي', 'score': 0.9319471716880798, 'entity': 'LOC'}]
```
|
Bingsu/timm-mobilevitv2_050-beans
|
Bingsu
| 2022-10-06T08:59:38Z | 18 | 0 |
timm
|
[
"timm",
"pytorch",
"image-classification",
"dataset:beans",
"region:us"
] |
image-classification
| 2022-08-08T07:40:55Z |
---
tags:
- image-classification
- timm
library_tag: timm
datasets:
- beans
widget:
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/angular_leaf_spot.jpeg
example_title: Angular Leaf Spot
- src: https://huggingface.co/nateraw/vit-base-beans/resolve/main/bean_rust.jpeg
example_title: Bean Rust
---
# Model card for timm-mobilevitv2_050-beans
This model is a fine-tuned version of `mobilevitv2_050` (from timm) on the `beans` dataset. It achieves the following results on the validation set:
- Loss: 0.08228
- Accuracy: 0.9850
- F1Score: 0.9846
## Image normalization
ImageNet
```python
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
```
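A minimal inference sketch (not part of the original card), assuming the checkpoint can be loaded through timm's `hf-hub:` prefix and that `resolve_data_config` picks up the normalization above; the image path is a placeholder:
```python
import timm
import torch
from PIL import Image
from timm.data import resolve_data_config, create_transform

model = timm.create_model("hf-hub:Bingsu/timm-mobilevitv2_050-beans", pretrained=True)
model.eval()

# Build the preprocessing pipeline (resize, crop, normalize) from the model's config
config = resolve_data_config({}, model=model)
transform = create_transform(**config)

img = Image.open("some_leaf.jpg").convert("RGB")  # placeholder image path
with torch.no_grad():
    probs = model(transform(img).unsqueeze(0)).softmax(dim=-1)
print(probs)
```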
|
ShadowPower/waifu-diffusion.openvino
|
ShadowPower
| 2022-10-06T08:52:27Z | 0 | 9 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
text-to-image
| 2022-09-06T09:03:00Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: bigscience-bloom-rail-1.0
inference: false
---
For use with this repo: [GitHub - bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino)
References:
[bes-dev/stable-diffusion-v1-4-openvino · Hugging Face](https://huggingface.co/bes-dev/stable-diffusion-v1-4-openvino)
[hakurei/waifu-diffusion · Hugging Face](https://huggingface.co/hakurei/waifu-diffusion)
[GitHub - harishanand95/diffusers: 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch](https://github.com/harishanand95/diffusers)
|
Seongmi/kobert-finetuned-klue-v2
|
Seongmi
| 2022-10-06T08:20:10Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-06T05:48:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: kobert-finetuned-klue-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobert-finetuned-klue-v2
This model is a fine-tuned version of [monologg/kobert](https://huggingface.co/monologg/kobert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3234
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.5898 | 1.08 | 500 | 5.2618 |
| 5.217 | 2.16 | 1000 | 5.1505 |
| 5.1044 | 3.24 | 1500 | 5.0895 |
| 5.0048 | 4.32 | 2000 | 5.0649 |
| 4.8292 | 5.4 | 2500 | 4.9589 |
| 4.5451 | 6.48 | 3000 | 4.8549 |
| 4.2284 | 7.56 | 3500 | 4.8801 |
| 3.9195 | 8.64 | 4000 | 4.8797 |
| 3.6506 | 9.72 | 4500 | 4.8009 |
| 3.4175 | 10.8 | 5000 | 4.8996 |
| 3.1964 | 11.88 | 5500 | 4.9734 |
| 3.0401 | 12.96 | 6000 | 4.9378 |
| 2.8965 | 14.04 | 6500 | 5.3631 |
| 2.7672 | 15.12 | 7000 | 5.3234 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
YYK/YYK
|
YYK
| 2022-10-06T07:25:27Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-06T07:25:08Z |
Missing you is a lonely habit;
one I can't quit no matter what.
Like a poison with no cure;
it simply can't be healed.
Loving you is my only habit;
sleeping pills are no match for an embrace;
beautiful, yet it has lost its taste,
like a promise from the past that is now a joke.
Missing you is a lonely habit;
one I can't quit no matter what.
Whether we quarrel or squabble,
I don't want to forget any of it.
Loving you is my only habit;
sleeping pills are no match for an embrace;
so beautiful, I hope we grow old together.
After the rain clears, may you receive my signal.
|
ArunVP3799/tf_finetuned_doctr_v2
|
ArunVP3799
| 2022-10-06T06:54:56Z | 2 | 0 |
transformers
|
[
"transformers",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-10-06T06:54:46Z |
---
language: en
---
<p align="center">
<img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%">
</p>
**Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch**
## Task: recognition
https://github.com/mindee/doctr
### Example usage:
```python
>>> from doctr.io import DocumentFile
>>> from doctr.models import ocr_predictor, from_hub
>>> img = DocumentFile.from_images(['<image_path>'])
>>> # Load your model from the hub
>>> model = from_hub('mindee/my-model')
>>> # Pass it to the predictor
>>> # If your model is a recognition model:
>>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large',
>>> reco_arch=model,
>>> pretrained=True)
>>> # If your model is a detection model:
>>> predictor = ocr_predictor(det_arch=model,
>>> reco_arch='crnn_mobilenet_v3_small',
>>> pretrained=True)
>>> # Get your predictions
>>> res = predictor(img)
```
### Run Configuration
```json
{
  "arch": "crnn_vgg16_bn",
  "train_path": "/content/drive/Shareddrives/DataScience/DISA/datasets/IAM_Dataset/IAM/data",
  "val_path": "/content/drive/MyDrive/OCR_Finetuning/test",
  "train_samples": 1000,
  "val_samples": 20,
  "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf",
  "min_chars": 1,
  "max_chars": 12,
  "name": null,
  "epochs": 10,
  "batch_size": 64,
  "input_size": 32,
  "lr": 0.001,
  "workers": 2,
  "resume": null,
  "vocab": "legacy_french",
  "test_only": false,
  "show_samples": false,
  "wb": false,
  "push_to_hub": false,
  "pretrained": true,
  "amp": false,
  "find_lr": false
}
```
|
tubyneto/my_new_model
|
tubyneto
| 2022-10-06T05:57:16Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-06T05:57:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# tubyneto/my_new_model
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('tubyneto/my_new_model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=tubyneto/my_new_model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sd-concepts-library/beldam
|
sd-concepts-library
| 2022-10-06T04:31:38Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-06T04:31:28Z |
---
license: mit
---
### beldam on Stable Diffusion
This is the `beldam` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
















|
YujiK/deberta-v3-small-test_ver1
|
YujiK
| 2022-10-06T04:09:37Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-06T03:31:16Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-small-test_ver1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-test_ver1
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7818
- Pearson: 0.8125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.8486 | 1.0 | 2052 | 0.7806 | 0.7692 |
| 0.6951 | 2.0 | 4104 | 0.7546 | 0.7934 |
| 0.5971 | 3.0 | 6156 | 0.7366 | 0.8085 |
| 0.4998 | 4.0 | 8208 | 0.7407 | 0.8136 |
| 0.4407 | 5.0 | 10260 | 0.7818 | 0.8125 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
GItaf/bert-base-uncased-bert-base-uncased-mc-weight0-epoch15
|
GItaf
| 2022-10-06T02:37:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-04T03:25:21Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-bert-base-uncased-mc-weight0-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-bert-base-uncased-mc-weight0-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3651
- Cls loss: 2.9223
- Lm loss: 4.3649
- Cls Accuracy: 0.0248
- Cls F1: 0.0057
- Cls Precision: 0.0061
- Cls Recall: 0.0248
- Perplexity: 78.64
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 4.8711 | 1.0 | 3470 | 4.5156 | 2.9252 | 4.5155 | 0.0213 | 0.0047 | 0.0042 | 0.0213 | 91.42 |
| 4.483 | 2.0 | 6940 | 4.4193 | 2.9248 | 4.4191 | 0.0219 | 0.0048 | 0.0042 | 0.0219 | 83.02 |
| 4.3345 | 3.0 | 10410 | 4.3684 | 2.9244 | 4.3682 | 0.0219 | 0.0048 | 0.0042 | 0.0219 | 78.91 |
| 4.2266 | 4.0 | 13880 | 4.3445 | 2.9241 | 4.3443 | 0.0225 | 0.0049 | 0.0043 | 0.0225 | 77.04 |
| 4.1388 | 5.0 | 17350 | 4.3260 | 2.9237 | 4.3258 | 0.0231 | 0.0050 | 0.0044 | 0.0231 | 75.63 |
| 4.0644 | 6.0 | 20820 | 4.3299 | 2.9234 | 4.3297 | 0.0231 | 0.0050 | 0.0044 | 0.0231 | 75.92 |
| 3.999 | 7.0 | 24290 | 4.3278 | 2.9232 | 4.3276 | 0.0231 | 0.0059 | 0.0061 | 0.0231 | 75.76 |
| 3.9426 | 8.0 | 27760 | 4.3269 | 2.9230 | 4.3267 | 0.0231 | 0.0059 | 0.0061 | 0.0231 | 75.70 |
| 3.8929 | 9.0 | 31230 | 4.3324 | 2.9228 | 4.3322 | 0.0248 | 0.0061 | 0.0062 | 0.0248 | 76.11 |
| 3.8488 | 10.0 | 34700 | 4.3382 | 2.9227 | 4.3380 | 0.0248 | 0.0061 | 0.0064 | 0.0248 | 76.55 |
| 3.8116 | 11.0 | 38170 | 4.3461 | 2.9225 | 4.3459 | 0.0242 | 0.0057 | 0.0061 | 0.0242 | 77.16 |
| 3.7791 | 12.0 | 41640 | 4.3537 | 2.9224 | 4.3535 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 77.75 |
| 3.7532 | 13.0 | 45110 | 4.3593 | 2.9223 | 4.3591 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 78.19 |
| 3.7321 | 14.0 | 48580 | 4.3588 | 2.9223 | 4.3586 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 78.15 |
| 3.7182 | 15.0 | 52050 | 4.3651 | 2.9223 | 4.3649 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 78.64 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/other-mother
|
sd-concepts-library
| 2022-10-06T00:58:05Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-06T00:58:00Z |
---
license: mit
---
### other-mother on Stable Diffusion
This is the `other-mother` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
















|
Bakuraza/q-FrozenLake-v1-4x4-noSlippery
|
Bakuraza
| 2022-10-06T00:09:23Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-06T00:09:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Bakuraza/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
jEVVB/dreamSD
|
jEVVB
| 2022-10-05T22:03:26Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-05T22:03:26Z |
---
license: creativeml-openrail-m
---
|
monakth/bert-base-multilingual-uncased-finetuned-squad
|
monakth
| 2022-10-05T20:20:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-05T16:46:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-base-multilingual-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9976 | 1.0 | 5547 | 0.9970 |
| 0.7523 | 2.0 | 11094 | 0.9646 |
| 0.5824 | 3.0 | 16641 | 1.0321 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
selimsametoglu/xlm-roberta-base-finetuned-panx-de
|
selimsametoglu
| 2022-10-05T19:35:08Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-15T21:44:24Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8683253805953192
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1386
- F1: 0.8683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 15
- eval_batch_size: 15
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2501 | 1.0 | 839 | 0.1879 | 0.8135 |
| 0.1328 | 2.0 | 1678 | 0.1419 | 0.8475 |
| 0.0792 | 3.0 | 2517 | 0.1386 | 0.8683 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
minjibi/test1000v2
|
minjibi
| 2022-10-05T19:18:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-05T17:52:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test1000v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test1000v2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7873
- Wer: 0.6162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.7913 | 3.22 | 100 | 3.3481 | 1.0 |
| 3.3831 | 6.44 | 200 | 3.3229 | 1.0 |
| 3.3778 | 9.67 | 300 | 3.3211 | 1.0 |
| 3.3671 | 12.89 | 400 | 3.2973 | 1.0 |
| 3.3528 | 16.13 | 500 | 3.1349 | 1.0 |
| 1.8611 | 19.35 | 600 | 0.7873 | 0.6162 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu102
- Datasets 1.4.1
- Tokenizers 0.12.1
|
CalamitousVisibility/enron-spam-checker-10000
|
CalamitousVisibility
| 2022-10-05T19:00:54Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T06:52:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: enron-spam-checker-10000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# enron-spam-checker-10000
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0512
- Accuracy: 0.9915
- F1: [0.99143577 0.99156328]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cpu
- Datasets 2.5.1
- Tokenizers 0.12.1
|
et-do/invasive_plant_classifier
|
et-do
| 2022-10-05T18:07:42Z | 0 | 0 |
FastAI
|
[
"FastAI",
"region:us"
] | null | 2022-08-09T02:55:02Z |
---
library_name: FastAI
tags:
- FastAI
---
# British Columbia Invasive Plants Identifier
## Model Details
An invasive plant classifier built with FastAI and trained on a dataset of images scraped from Bing Image Search.
The model is able to detect 6 species of invasive plants (marked for Provincial Containment) as defined by the government of British Columbia.
Hosted in a HuggingFace Space accessible here: https://huggingface.co/spaces/et-do/bc_invasive_plant_classifier
## Notebook Details
In the main notebook (https://github.com/et-do/invasive_plant_classifier), I apply transfer learning to a PyTorch image classification CNN (ResNet-34) so that it can identify both the species and its level of invasiveness in British Columbia, as defined by https://www2.gov.bc.ca/gov/content/environment/plants-animals-ecosystems/invasive-species/priority-species/priority-plants
Currently, the BC government identifies invasive plants across 5 categories:
- **Prevent**: Species determined to be high risk to BC and not yet established. Management objective is to prevent the introduction and establishment.
- **Provincial EDRR**: Species is high risk to B.C. and is new to the Province. Management objective is eradication.
- **Provincial Containment**: Species is high risk with limited extent in B.C. but significant potential to spread. Management objective is to prevent further expansion into new areas, with the ultimate goal of reducing the overall extent.
- **Regional Containment/Control**: Species is high risk and well established, or medium risk with high potential for spread. Management objective is to prevent further expansion into new areas within the region through the establishment of containment lines and the identification of occurrences outside the line to control.
- **Management**: Species is more widespread but may be of concern in specific situations with certain high values - e.g., conservation lands, specific agriculture crops. Management objective is to reduce the invasive species impacts locally or regionally, where resources are available.
All of these categories could be extremely relevant to a free-to-use plant-identifier web app. However, for the sake of API costs, resource management, and model complexity, the first version of the model is only trained to recognize plants under the Provincial Containment category (n=6). Since the web app is not geographically restricted, it can be used both inside and outside BC to identify plants whose management objective is to limit occurrences beyond the province, which could provide immense value.
The notebook will walkthrough:
- Data gathering and validation
- Data preprocessing & augmentation via FastAi dataloaders
- Training the model on the new dataset, and using results to further clean the data
- Serving the model under a huggingface space
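## Inference example
For reference, a minimal FastAI inference sketch is shown below; the exported learner filename and the image path are illustrative assumptions, not files taken from this repo:
```python
from fastai.vision.all import PILImage, load_learner

# Load the exported learner (filename assumed) and classify a photo of a plant
learn = load_learner("invasive_plant_classifier.pkl")
pred_class, pred_idx, probs = learn.predict(PILImage.create("plant_photo.jpg"))
print(f"Predicted: {pred_class} ({probs[pred_idx]:.2%} confidence)")
```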
|
HuggingAlex1247/gelectra-large-germaner
|
HuggingAlex1247
| 2022-10-05T17:49:34Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"electra",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-05T17:45:59Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: HuggingAlex1247/gelectra-large-germaner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HuggingAlex1247/gelectra-large-germaner
This model is a fine-tuned version of [deepset/gelectra-large](https://huggingface.co/deepset/gelectra-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1696
- Validation Loss: 0.0800
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3475, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1696 | 0.0800 | 0 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.6.2
- Datasets 1.18.0
- Tokenizers 0.12.1
|
grantsl/distilbert-base-uncased-finetuned-emotion-3
|
grantsl
| 2022-10-05T17:49:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T17:26:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3696
- Accuracy: 0.8333
- F1: 0.8333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 200
- eval_batch_size: 200
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4146 | 1.0 | 560 | 0.3742 | 0.8317 | 0.8316 |
| 0.3429 | 2.0 | 1120 | 0.3696 | 0.8333 | 0.8333 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
sd-concepts-library/lazytown-stephanie
|
sd-concepts-library
| 2022-10-05T17:25:11Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-05T17:24:59Z |
---
license: mit
---
### lazytown-stephanie on Stable Diffusion
This is the `lazytown-stephanie` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
















|
MM2157/bert-finetuned-propaganda-18
|
MM2157
| 2022-10-05T17:16:57Z | 135 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-08T15:04:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-propaganda-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-propaganda-18
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6542
- Precision: 0.0924
- Recall: 0.0470
- F1: 0.0623
- Accuracy: 0.8836
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.6679 | 1.0 | 670 | 0.7379 | 0.125 | 0.0035 | 0.0069 | 0.8868 |
| 0.548 | 2.0 | 1340 | 0.5916 | 0.0845 | 0.0435 | 0.0574 | 0.8831 |
| 0.3781 | 3.0 | 2010 | 0.6542 | 0.0924 | 0.0470 | 0.0623 | 0.8836 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ivanlau/wav2vec2-large-xls-r-300m-cantonese
|
ivanlau
| 2022-10-05T16:10:07Z | 71 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"zh-HK",
"zh",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- zh
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_8_0
- robust-speech-event
- zh-HK
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: XLS-R-300M - Chinese_HongKong (Cantonese)
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-hk
metrics:
- name: Test WER
type: wer
value: 0.8111349803079126
- name: Test CER
type: cer
value: 0.21962250882996914
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-hk
metrics:
- name: Test WER
type: wer
value: 1.0
- name: Test CER
type: cer
value: 0.6160564326503191
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8
type: mozilla-foundation/common_voice_8_0
args: zh-HK
metrics:
- name: Test WER with LM
type: wer
value: 0.8055853920515574
- name: Test CER with LM
type: cer
value: 0.21578686612008757
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: zh-HK
metrics:
- name: Test WER with LM
type: wer
value: 1.0012453300124533
- name: Test CER with LM
type: cer
value: 0.6153006382264025
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: zh-HK
metrics:
- name: Test CER
type: cer
value: 61.55
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLS-R-300M - Chinese_HongKong (Cantonese)
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ZH-HK dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4848
- Wer: 0.8004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 1.0 | 183 | 47.8442 | 1.0 |
| No log | 2.0 | 366 | 6.3109 | 1.0 |
| 41.8902 | 3.0 | 549 | 6.2392 | 1.0 |
| 41.8902 | 4.0 | 732 | 5.9739 | 1.1123 |
| 41.8902 | 5.0 | 915 | 4.9014 | 1.9474 |
| 5.5817 | 6.0 | 1098 | 3.9892 | 1.0188 |
| 5.5817 | 7.0 | 1281 | 3.5080 | 1.0104 |
| 5.5817 | 8.0 | 1464 | 3.0797 | 0.9905 |
| 3.5579 | 9.0 | 1647 | 2.8111 | 0.9836 |
| 3.5579 | 10.0 | 1830 | 2.6726 | 0.9815 |
| 2.7771 | 11.0 | 2013 | 2.7177 | 0.9809 |
| 2.7771 | 12.0 | 2196 | 2.3582 | 0.9692 |
| 2.7771 | 13.0 | 2379 | 2.1708 | 0.9757 |
| 2.3488 | 14.0 | 2562 | 2.0491 | 0.9526 |
| 2.3488 | 15.0 | 2745 | 1.8518 | 0.9378 |
| 2.3488 | 16.0 | 2928 | 1.6845 | 0.9286 |
| 1.7859 | 17.0 | 3111 | 1.6412 | 0.9280 |
| 1.7859 | 18.0 | 3294 | 1.5488 | 0.9035 |
| 1.7859 | 19.0 | 3477 | 1.4546 | 0.9010 |
| 1.3898 | 20.0 | 3660 | 1.5147 | 0.9201 |
| 1.3898 | 21.0 | 3843 | 1.4467 | 0.8959 |
| 1.1291 | 22.0 | 4026 | 1.4743 | 0.9035 |
| 1.1291 | 23.0 | 4209 | 1.3827 | 0.8762 |
| 1.1291 | 24.0 | 4392 | 1.3437 | 0.8792 |
| 0.8993 | 25.0 | 4575 | 1.2895 | 0.8577 |
| 0.8993 | 26.0 | 4758 | 1.2928 | 0.8558 |
| 0.8993 | 27.0 | 4941 | 1.2947 | 0.9163 |
| 0.6298 | 28.0 | 5124 | 1.3151 | 0.8738 |
| 0.6298 | 29.0 | 5307 | 1.2972 | 0.8514 |
| 0.6298 | 30.0 | 5490 | 1.3030 | 0.8432 |
| 0.4757 | 31.0 | 5673 | 1.3264 | 0.8364 |
| 0.4757 | 32.0 | 5856 | 1.3131 | 0.8421 |
| 0.3735 | 33.0 | 6039 | 1.3457 | 0.8588 |
| 0.3735 | 34.0 | 6222 | 1.3450 | 0.8473 |
| 0.3735 | 35.0 | 6405 | 1.3452 | 0.9218 |
| 0.3253 | 36.0 | 6588 | 1.3754 | 0.8397 |
| 0.3253 | 37.0 | 6771 | 1.3554 | 0.8353 |
| 0.3253 | 38.0 | 6954 | 1.3532 | 0.8312 |
| 0.2816 | 39.0 | 7137 | 1.3694 | 0.8345 |
| 0.2816 | 40.0 | 7320 | 1.3953 | 0.8296 |
| 0.2397 | 41.0 | 7503 | 1.3858 | 0.8293 |
| 0.2397 | 42.0 | 7686 | 1.3959 | 0.8402 |
| 0.2397 | 43.0 | 7869 | 1.4350 | 0.9318 |
| 0.2084 | 44.0 | 8052 | 1.4004 | 0.8806 |
| 0.2084 | 45.0 | 8235 | 1.3871 | 0.8255 |
| 0.2084 | 46.0 | 8418 | 1.4060 | 0.8252 |
| 0.1853 | 47.0 | 8601 | 1.3992 | 0.8501 |
| 0.1853 | 48.0 | 8784 | 1.4186 | 0.8252 |
| 0.1853 | 49.0 | 8967 | 1.4120 | 0.8165 |
| 0.1671 | 50.0 | 9150 | 1.4166 | 0.8214 |
| 0.1671 | 51.0 | 9333 | 1.4411 | 0.8501 |
| 0.1513 | 52.0 | 9516 | 1.4692 | 0.8394 |
| 0.1513 | 53.0 | 9699 | 1.4640 | 0.8391 |
| 0.1513 | 54.0 | 9882 | 1.4501 | 0.8419 |
| 0.133 | 55.0 | 10065 | 1.4134 | 0.8351 |
| 0.133 | 56.0 | 10248 | 1.4593 | 0.8405 |
| 0.133 | 57.0 | 10431 | 1.4560 | 0.8389 |
| 0.1198 | 58.0 | 10614 | 1.4734 | 0.8334 |
| 0.1198 | 59.0 | 10797 | 1.4649 | 0.8318 |
| 0.1198 | 60.0 | 10980 | 1.4659 | 0.8100 |
| 0.1109 | 61.0 | 11163 | 1.4784 | 0.8119 |
| 0.1109 | 62.0 | 11346 | 1.4938 | 0.8149 |
| 0.1063 | 63.0 | 11529 | 1.5050 | 0.8152 |
| 0.1063 | 64.0 | 11712 | 1.4773 | 0.8176 |
| 0.1063 | 65.0 | 11895 | 1.4836 | 0.8261 |
| 0.0966 | 66.0 | 12078 | 1.4979 | 0.8157 |
| 0.0966 | 67.0 | 12261 | 1.4603 | 0.8048 |
| 0.0966 | 68.0 | 12444 | 1.4803 | 0.8127 |
| 0.0867 | 69.0 | 12627 | 1.4974 | 0.8130 |
| 0.0867 | 70.0 | 12810 | 1.4721 | 0.8078 |
| 0.0867 | 71.0 | 12993 | 1.4644 | 0.8192 |
| 0.0827 | 72.0 | 13176 | 1.4835 | 0.8138 |
| 0.0827 | 73.0 | 13359 | 1.4934 | 0.8122 |
| 0.0734 | 74.0 | 13542 | 1.4951 | 0.8062 |
| 0.0734 | 75.0 | 13725 | 1.4908 | 0.8070 |
| 0.0734 | 76.0 | 13908 | 1.4876 | 0.8124 |
| 0.0664 | 77.0 | 14091 | 1.4934 | 0.8053 |
| 0.0664 | 78.0 | 14274 | 1.4603 | 0.8048 |
| 0.0664 | 79.0 | 14457 | 1.4732 | 0.8073 |
| 0.0602 | 80.0 | 14640 | 1.4925 | 0.8078 |
| 0.0602 | 81.0 | 14823 | 1.4812 | 0.8064 |
| 0.057 | 82.0 | 15006 | 1.4950 | 0.8013 |
| 0.057 | 83.0 | 15189 | 1.4785 | 0.8056 |
| 0.057 | 84.0 | 15372 | 1.4856 | 0.7993 |
| 0.0517 | 85.0 | 15555 | 1.4755 | 0.8034 |
| 0.0517 | 86.0 | 15738 | 1.4813 | 0.8034 |
| 0.0517 | 87.0 | 15921 | 1.4966 | 0.8048 |
| 0.0468 | 88.0 | 16104 | 1.4883 | 0.8002 |
| 0.0468 | 89.0 | 16287 | 1.4746 | 0.8023 |
| 0.0468 | 90.0 | 16470 | 1.4697 | 0.7974 |
| 0.0426 | 91.0 | 16653 | 1.4775 | 0.8004 |
| 0.0426 | 92.0 | 16836 | 1.4852 | 0.8023 |
| 0.0387 | 93.0 | 17019 | 1.4868 | 0.8004 |
| 0.0387 | 94.0 | 17202 | 1.4785 | 0.8021 |
| 0.0387 | 95.0 | 17385 | 1.4892 | 0.8015 |
| 0.0359 | 96.0 | 17568 | 1.4862 | 0.8018 |
| 0.0359 | 97.0 | 17751 | 1.4851 | 0.8007 |
| 0.0359 | 98.0 | 17934 | 1.4846 | 0.7999 |
| 0.0347 | 99.0 | 18117 | 1.4852 | 0.7993 |
| 0.0347 | 100.0 | 18300 | 1.4848 | 0.8004 |
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id ivanlau/wav2vec2-large-xls-r-300m-cantonese --dataset mozilla-foundation/common_voice_8_0 --config zh-HK --split test --log_outputs
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id ivanlau/wav2vec2-large-xls-r-300m-cantonese --dataset speech-recognition-community-v2/dev_data --config zh-HK --split validation --chunk_length_s 5.0 --stride_length_s 1.0 --log_outputs
```
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ronanki/MiniLM-L12-v2-SEP-token
|
ronanki
| 2022-10-05T15:37:39Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-05T15:37:30Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ronanki/MiniLM-L12-v2-SEP-token
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ronanki/MiniLM-L12-v2-SEP-token')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ronanki/MiniLM-L12-v2-SEP-token')
model = AutoModel.from_pretrained('ronanki/MiniLM-L12-v2-SEP-token')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ronanki/MiniLM-L12-v2-SEP-token)
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 329 with parameters:
```
{'batch_size': 64}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 658,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
victorbahlangene/distilbert-base-uncased-finetuned-emotion
|
victorbahlangene
| 2022-10-05T14:40:25Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T13:57:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9220357543635932
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.922
- F1: 0.9220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.3017 | 0.9105 | 0.9087 |
| 0.2455 | 2.0 | 500 | 0.2141 | 0.922 | 0.9220 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
DigitalUmuganda/KinyarwandaTTS
|
DigitalUmuganda
| 2022-10-05T13:59:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-04T00:39:40Z |
# Grapheme-based statistical parametric synthesizer for Kinyarwanda
A grapheme-based approach was chosen because it gives acceptable performance for low-resource languages. For instance, this model was trained on approximately 5 hours of Kinyarwanda audio with the corresponding transcriptions; no further language-specific information was provided.
The [Festvox](http://festvox.org/) suite of tools was employed to build the model, and the Flite engine was used to generate a small, portable executable for this model. Currently, the model can only be run on Linux.
## Model description
To build the voice, we needed to map graphemes to their corresponding phonemes. In this work we used the UniTran-based approach to building the voice: the graphemes are converted to UTF-8 code points, which are then converted to a guessed phonetic transcription in X-SAMPA. After obtaining the phonemes, we use an HMM model from the Clustergen framework on each of them to extract important features. These features are then used to train a random forest (20 decision trees) to predict spectral features. The model achieves an `MCD` of `5.03`.
## Limitations and Recommendations
The voice produced lacks crispness and in some cases ignores tonal information, which is indispensable in Kinyarwanda. We believe that with a larger corpus of linguistic information the voice would sound more natural.
## Usage
Use the following to convert text to a wav file:
``` sh
./flite_du_kin_tts -f kinyarwanda.txt kinyarwanda.wav
```
And to use a terminal prompt, use:
``` sh
./flite_du_kin_tts -t "Muraho Rwanda" kinyarwanda.wav
```
|
murat/kyrgyz_language_NER
|
murat
| 2022-10-05T13:48:18Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ky",
"dataset:wikiann",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-05T11:41:45Z |
---
language: ky
datasets:
- wikiann
examples:
widget:
- text: "Бириккен Улуттар Уюму"
example_title: "Sentence_1"
- text: "Жусуп Мамай"
example_title: "Sentence_2"
---
<h1>Kyrgyz Named Entity Recognition</h1>
Fine-tuning bert-base-multilingual-cased on the WikiAnn dataset for NER on the Kyrgyz language.
WARNING: this model is not usable (see metrics below) and is built just as a proof of concept.
I'll update the model after cleaning up the WikiAnn dataset (the `ky` split contains only 100 train/test/validation items) or coming up with a completely new dataset.
## Label ID and its corresponding label name
| Label ID | Label Name|
| -------- | ----- |
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG|
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
<h1>Results</h1>
| Name | Overall F1 | LOC F1 | ORG F1 | PER F1 |
| ---- | -------- | ----- | ---- | ---- |
| Train set | 0.595683 | 0.570312 | 0.687179 | 0.549180 |
| Validation set | 0.461333 | 0.551181 | 0.401913 | 0.425087 |
| Test set | 0.442622 | 0.456852 | 0.469565 | 0.413114 |
## Example
```py
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("murat/kyrgyz_language_NER")
model = AutoModelForTokenClassification.from_pretrained("murat/kyrgyz_language_NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Жусуп Мамай"
ner_results = nlp(example)
ner_results
```
|
joelearn22/ppo-LunarLander-v2
|
joelearn22
| 2022-10-05T13:14:18Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-05T13:13:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MLP
results:
- metrics:
- type: mean_reward
value: 262.49 +/- 31.60
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **MLP** Agent playing **LunarLander-v2**
This is a trained model of an **MLP** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename and the use of SB3's `PPO` class are assumptions; check the repo files for the exact names):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("joelearn22/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lanpouthakoun/q-FrozenLake-v1-4x4-noSlippery
|
lanpouthakoun
| 2022-10-05T12:51:51Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-05T12:49:43Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="andlanpo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
LondonStory/txlm-roberta-hindi-sentiment
|
LondonStory
| 2022-10-05T12:25:30Z | 6,377 | 3 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"txlm-roberta-hindi-sentiment",
"hi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T11:31:24Z |
---
tags:
- txlm-roberta-hindi-sentiment
language:
- hi
license: mit
---
# T-XLM-RoBERTa-Hindi-Sentiment
`T-XLM-RoBERTa-Hindi-Sentiment` model is a fine-tuned version of the [Twitter-XLM-RoBERTa-base](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base) model from Cardiff-NLP.
## Description of the model and the training data
`txlm-roberta-hindi-sentiment` is a Hindi language sentiment classifier model (in Devanagari script) which is trained on a publicly available Hindi language dataset. See the GitHub source of the dataset [HERE](https://github.com/sid573/Hindi_Sentiment_Analysis).
The training, test and validation datasets consist of 6807, 1634 and 635 labelled Hindi-language examples, respectively.
The trained model shows a weighted average macro F1-score of 0.89 (please see the confusion matrix in the Google Colab notebook below).
## Code
The Google Colab notebook, where the model is fine-tuned using native PyTorch modules, can be found on LondonStory's GitHub page [HERE](https://github.com/LondonStory/Supervised-NLP-models/blob/main/T-XLM-RoBERTa-base-finetuning-with-pytorch.ipynb).
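## Usage example
A minimal usage sketch with the 🤗 Transformers pipeline (the example sentence and the label names in the output are illustrative only):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="LondonStory/txlm-roberta-hindi-sentiment")

# "यह फिल्म बहुत अच्छी थी" = "This movie was very good"
print(classifier("यह फिल्म बहुत अच्छी थी"))  # e.g. [{'label': '...', 'score': 0.97}]
```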
|
arnaudlauer/distilbert-base-uncased-finetuned-emotion
|
arnaudlauer
| 2022-10-05T12:18:14Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T12:05:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6
- name: F1
type: f1
value: 0.4998690476190476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4150
- Accuracy: 0.6
- F1: 0.4999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.6648 | 1.0 | 25 | 1.4656 | 0.37 | 0.2128 |
| 1.5452 | 2.0 | 50 | 1.4150 | 0.6 | 0.4999 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
bvrau/covid-general-news-bert
|
bvrau
| 2022-10-05T11:38:51Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T03:49:02Z |
---
license: afl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: covid-general-news-bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-general-news-bert
This model is a fine-tuned version of [bvrau/covid-twitter-bert-v2-struth](https://huggingface.co/bvrau/covid-twitter-bert-v2-struth) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0688
- Accuracy: 0.9774
- Precision: 0.9781
- Recall: 0.9738
- F1: 0.9760
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.2183 | 1.0 | 365 | 0.0688 | 0.9774 | 0.9781 | 0.9738 | 0.9760 |
| 0.0783 | 2.0 | 730 | 0.0754 | 0.9842 | 0.9812 | 0.9855 | 0.9833 |
| 0.0354 | 3.0 | 1095 | 0.0766 | 0.9856 | 0.9785 | 0.9913 | 0.9848 |
| 0.0185 | 4.0 | 1460 | 0.0956 | 0.9822 | 0.9715 | 0.9913 | 0.9813 |
| 0.0227 | 5.0 | 1825 | 0.0693 | 0.9870 | 0.9827 | 0.9898 | 0.9862 |
| 0.0084 | 6.0 | 2190 | 0.0870 | 0.9849 | 0.9926 | 0.9753 | 0.9839 |
| 0.0021 | 7.0 | 2555 | 0.0729 | 0.9877 | 0.9883 | 0.9855 | 0.9869 |
| 0.0002 | 8.0 | 2920 | 0.1197 | 0.9808 | 0.9688 | 0.9913 | 0.9799 |
| 0.0033 | 9.0 | 3285 | 0.0768 | 0.9884 | 0.9912 | 0.9840 | 0.9876 |
| 0.0009 | 10.0 | 3650 | 0.1013 | 0.9863 | 0.9869 | 0.9840 | 0.9854 |
| 0.0 | 11.0 | 4015 | 0.1069 | 0.9863 | 0.9869 | 0.9840 | 0.9854 |
| 0.0 | 12.0 | 4380 | 0.1124 | 0.9856 | 0.9854 | 0.9840 | 0.9847 |
| 0.0 | 13.0 | 4745 | 0.1175 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
| 0.0 | 14.0 | 5110 | 0.1221 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
| 0.0 | 15.0 | 5475 | 0.1256 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
| 0.0 | 16.0 | 5840 | 0.1286 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
| 0.0 | 17.0 | 6205 | 0.1300 | 0.9856 | 0.9854 | 0.9840 | 0.9847 |
| 0.0 | 18.0 | 6570 | 0.1293 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
| 0.0 | 19.0 | 6935 | 0.1304 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
| 0.0 | 20.0 | 7300 | 0.1308 | 0.9849 | 0.9854 | 0.9826 | 0.9840 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
svalabs/rembert-german-question-answering
|
svalabs
| 2022-10-05T09:26:02Z | 111 | 3 |
transformers
|
[
"transformers",
"pytorch",
"rembert",
"question-answering",
"qa",
"de",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-05T09:06:25Z |
---
license: cc-by-4.0
language:
- de
task_categories:
- question-answering
tags:
- question-answering
- pytorch
- qa
- de
---
# SVALabs - RemBERT German QA
In this repository, we present our German question answering model.
The trained model is based on [RemBERT](https://huggingface.co/google/rembert) and was finetuned using the [SQuAD](https://huggingface.co/datasets/squad) dataset and the [GermanQuAD](https://huggingface.co/datasets/deepset/germanquad) dataset.
### Model Details
| | Description or Link |
|---|---|
|**Base model** | [```RemBERT```](https://huggingface.co/google/rembert) |
|**Finetuning task**| Question Answering |
|**Source datasets**| [```SQuAD```](https://huggingface.co/datasets/squad); [```GermanQuAD```](https://huggingface.co/datasets/deepset/germanquad)|
### Performance
The model was tested on 1692 samples of the GermanQuAD test dataset (the other samples were used for validation).
- F1-Score: 87.75
- EM: 73.35
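### Usage example
A minimal usage sketch with the 🤗 Transformers question-answering pipeline (the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="svalabs/rembert-german-question-answering")

result = qa(
    question="Was ist die Hauptstadt von Deutschland?",
    context="Berlin ist die Hauptstadt der Bundesrepublik Deutschland.",
)
print(result["answer"])  # expected: "Berlin"
```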
|
huggingtweets/anandmahindra-elonmusk-sahilbloom
|
huggingtweets
| 2022-10-05T08:56:54Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-05T08:53:14Z |
---
language: en
thumbnail: http://www.huggingtweets.com/anandmahindra-elonmusk-sahilbloom/1664960172385/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572573363255525377/Xz3fufYY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1574971893765251073/GglyevNe_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1462779172451983370/xAsgPikz_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & anand mahindra & Sahil Bloom</div>
<div style="text-align: center; font-size: 14px;">@anandmahindra-elonmusk-sahilbloom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & anand mahindra & Sahil Bloom.
| Data | Elon Musk | anand mahindra | Sahil Bloom |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3240 | 3250 |
| Retweets | 123 | 705 | 202 |
| Short tweets | 970 | 174 | 693 |
| Tweets kept | 2107 | 2361 | 2355 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1diitahr/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anandmahindra-elonmusk-sahilbloom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1qdx5m74) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1qdx5m74/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/anandmahindra-elonmusk-sahilbloom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mfreihaut/refinement-finetuned-mnli-kaggle-yahoo
|
mfreihaut
| 2022-10-05T08:46:49Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-05T03:44:00Z |
---
tags:
- generated_from_trainer
model-index:
- name: refinement-finetuned-mnli-kaggle-yahoo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# refinement-finetuned-mnli-kaggle-yahoo
This model is a fine-tuned version of [joeddav/bart-large-mnli-yahoo-answers](https://huggingface.co/joeddav/bart-large-mnli-yahoo-answers) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1324
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4649 | 1.0 | 12599 | 0.9705 |
| 0.494 | 2.0 | 25198 | 1.1055 |
| 0.3405 | 3.0 | 37797 | 1.1101 |
| 0.5063 | 4.0 | 50396 | 1.1324 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
anas-awadalla/gpt2-span-head-few-shot-k-1024-finetuned-squad-seed-4
|
anas-awadalla
| 2022-10-05T07:41:38Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-27T11:07:36Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-1024-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
shoaibazam/ppo-LunarLander-v2
|
shoaibazam
| 2022-10-05T07:35:13Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-04T12:47:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 229.76 +/- 21.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repo files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub("shoaibazam/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
anas-awadalla/gpt2-span-head-few-shot-k-512-finetuned-squad-seed-0
|
anas-awadalla
| 2022-10-05T05:49:56Z | 50 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-27T10:26:06Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
fabriceyhc/bert-base-uncased-amazon_polarity
|
fabriceyhc
| 2022-10-05T05:24:12Z | 149 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"sibyl",
"dataset:amazon_polarity",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- sibyl
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: bert-base-uncased-amazon_polarity
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.94647
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.9464875
verified: true
- name: Precision
type: precision
value: 0.9528844934702675
verified: true
- name: Recall
type: recall
value: 0.939425
verified: true
- name: AUC
type: auc
value: 0.9863499156250001
verified: true
- name: F1
type: f1
value: 0.9461068798388619
verified: true
- name: loss
type: loss
value: 0.2944573760032654
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-amazon_polarity
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2945
- Accuracy: 0.9465
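A minimal inference sketch using the transformers pipeline API (the returned label names come from the model config and are not documented in this card):
```python
from transformers import pipeline

# Load the fine-tuned sentiment classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="fabriceyhc/bert-base-uncased-amazon_polarity",
)

# Label names (e.g. negative/positive) depend on the model config.
print(classifier("This blender broke after two uses. Very disappointed."))
```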
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1782000
- training_steps: 17820000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 0.7155 | 0.0 | 2000 | 0.7060 | 0.4622 |
| 0.7054 | 0.0 | 4000 | 0.6925 | 0.5165 |
| 0.6842 | 0.0 | 6000 | 0.6653 | 0.6116 |
| 0.6375 | 0.0 | 8000 | 0.5721 | 0.7909 |
| 0.4671 | 0.0 | 10000 | 0.3238 | 0.8770 |
| 0.3403 | 0.0 | 12000 | 0.3692 | 0.8861 |
| 0.4162 | 0.0 | 14000 | 0.4560 | 0.8908 |
| 0.4728 | 0.0 | 16000 | 0.5071 | 0.8980 |
| 0.5111 | 0.01 | 18000 | 0.5204 | 0.9015 |
| 0.4792 | 0.01 | 20000 | 0.5193 | 0.9076 |
| 0.544 | 0.01 | 22000 | 0.4835 | 0.9133 |
| 0.4745 | 0.01 | 24000 | 0.4689 | 0.9170 |
| 0.4403 | 0.01 | 26000 | 0.4778 | 0.9177 |
| 0.4405 | 0.01 | 28000 | 0.4754 | 0.9163 |
| 0.4375 | 0.01 | 30000 | 0.4808 | 0.9175 |
| 0.4628 | 0.01 | 32000 | 0.4340 | 0.9244 |
| 0.4488 | 0.01 | 34000 | 0.4162 | 0.9265 |
| 0.4608 | 0.01 | 36000 | 0.4031 | 0.9271 |
| 0.4478 | 0.01 | 38000 | 0.4502 | 0.9253 |
| 0.4237 | 0.01 | 40000 | 0.4087 | 0.9279 |
| 0.4601 | 0.01 | 42000 | 0.4133 | 0.9269 |
| 0.4153 | 0.01 | 44000 | 0.4230 | 0.9306 |
| 0.4096 | 0.01 | 46000 | 0.4108 | 0.9301 |
| 0.4348 | 0.01 | 48000 | 0.4138 | 0.9309 |
| 0.3787 | 0.01 | 50000 | 0.4066 | 0.9324 |
| 0.4172 | 0.01 | 52000 | 0.4812 | 0.9206 |
| 0.3897 | 0.02 | 54000 | 0.4013 | 0.9325 |
| 0.3787 | 0.02 | 56000 | 0.3837 | 0.9344 |
| 0.4253 | 0.02 | 58000 | 0.3925 | 0.9347 |
| 0.3959 | 0.02 | 60000 | 0.3907 | 0.9353 |
| 0.4402 | 0.02 | 62000 | 0.3708 | 0.9341 |
| 0.4115 | 0.02 | 64000 | 0.3477 | 0.9361 |
| 0.3876 | 0.02 | 66000 | 0.3634 | 0.9373 |
| 0.4286 | 0.02 | 68000 | 0.3778 | 0.9378 |
| 0.422 | 0.02 | 70000 | 0.3540 | 0.9361 |
| 0.3732 | 0.02 | 72000 | 0.3853 | 0.9378 |
| 0.3641 | 0.02 | 74000 | 0.3951 | 0.9386 |
| 0.3701 | 0.02 | 76000 | 0.3582 | 0.9388 |
| 0.4498 | 0.02 | 78000 | 0.3268 | 0.9375 |
| 0.3587 | 0.02 | 80000 | 0.3825 | 0.9401 |
| 0.4474 | 0.02 | 82000 | 0.3155 | 0.9391 |
| 0.3598 | 0.02 | 84000 | 0.3666 | 0.9388 |
| 0.389 | 0.02 | 86000 | 0.3745 | 0.9377 |
| 0.3625 | 0.02 | 88000 | 0.3776 | 0.9387 |
| 0.3511 | 0.03 | 90000 | 0.4275 | 0.9336 |
| 0.3428 | 0.03 | 92000 | 0.4301 | 0.9336 |
| 0.4042 | 0.03 | 94000 | 0.3547 | 0.9359 |
| 0.3583 | 0.03 | 96000 | 0.3763 | 0.9396 |
| 0.3887 | 0.03 | 98000 | 0.3213 | 0.9412 |
| 0.3915 | 0.03 | 100000 | 0.3557 | 0.9409 |
| 0.3378 | 0.03 | 102000 | 0.3627 | 0.9418 |
| 0.349 | 0.03 | 104000 | 0.3614 | 0.9402 |
| 0.3596 | 0.03 | 106000 | 0.3834 | 0.9381 |
| 0.3519 | 0.03 | 108000 | 0.3560 | 0.9421 |
| 0.3598 | 0.03 | 110000 | 0.3485 | 0.9419 |
| 0.3642 | 0.03 | 112000 | 0.3754 | 0.9395 |
| 0.3477 | 0.03 | 114000 | 0.3634 | 0.9426 |
| 0.4202 | 0.03 | 116000 | 0.3071 | 0.9427 |
| 0.3656 | 0.03 | 118000 | 0.3155 | 0.9441 |
| 0.3709 | 0.03 | 120000 | 0.2923 | 0.9433 |
| 0.374 | 0.03 | 122000 | 0.3272 | 0.9441 |
| 0.3142 | 0.03 | 124000 | 0.3348 | 0.9444 |
| 0.3452 | 0.04 | 126000 | 0.3603 | 0.9436 |
| 0.3365 | 0.04 | 128000 | 0.3339 | 0.9434 |
| 0.3353 | 0.04 | 130000 | 0.3471 | 0.9450 |
| 0.343 | 0.04 | 132000 | 0.3508 | 0.9418 |
| 0.3174 | 0.04 | 134000 | 0.3753 | 0.9436 |
| 0.3009 | 0.04 | 136000 | 0.3687 | 0.9422 |
| 0.3785 | 0.04 | 138000 | 0.3818 | 0.9396 |
| 0.3199 | 0.04 | 140000 | 0.3291 | 0.9438 |
| 0.4049 | 0.04 | 142000 | 0.3372 | 0.9454 |
| 0.3435 | 0.04 | 144000 | 0.3315 | 0.9459 |
| 0.3814 | 0.04 | 146000 | 0.3462 | 0.9401 |
| 0.359 | 0.04 | 148000 | 0.3981 | 0.9361 |
| 0.3552 | 0.04 | 150000 | 0.3226 | 0.9469 |
| 0.345 | 0.04 | 152000 | 0.3731 | 0.9384 |
| 0.3228 | 0.04 | 154000 | 0.2956 | 0.9471 |
| 0.3637 | 0.04 | 156000 | 0.2869 | 0.9477 |
| 0.349 | 0.04 | 158000 | 0.3331 | 0.9430 |
| 0.3374 | 0.04 | 160000 | 0.4159 | 0.9340 |
| 0.3718 | 0.05 | 162000 | 0.3241 | 0.9459 |
| 0.315 | 0.05 | 164000 | 0.3544 | 0.9391 |
| 0.3215 | 0.05 | 166000 | 0.3311 | 0.9451 |
| 0.3464 | 0.05 | 168000 | 0.3682 | 0.9453 |
| 0.3495 | 0.05 | 170000 | 0.3193 | 0.9469 |
| 0.305 | 0.05 | 172000 | 0.4132 | 0.9389 |
| 0.3479 | 0.05 | 174000 | 0.3465 | 0.947 |
| 0.3537 | 0.05 | 176000 | 0.3277 | 0.9449 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.7.1
- Datasets 1.12.1
- Tokenizers 0.10.3
|
anas-awadalla/gpt2-span-head-few-shot-k-256-finetuned-squad-seed-4
|
anas-awadalla
| 2022-10-05T05:20:56Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-27T10:20:06Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-256-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-span-head-few-shot-k-128-finetuned-squad-seed-0
|
anas-awadalla
| 2022-10-05T03:49:12Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:44:09Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-128-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
mfreihaut/refinement-finetuned-mnli-kaggle-reversal
|
mfreihaut
| 2022-10-05T03:42:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-04T19:27:55Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: refinement-finetuned-mnli-kaggle-reversal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# refinement-finetuned-mnli-kaggle-reversal
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0382
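Since the base checkpoint is an MNLI model, the fine-tuned model can presumably still be driven through the zero-shot-classification pipeline; a minimal sketch with illustrative inputs:
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="mfreihaut/refinement-finetuned-mnli-kaggle-reversal",
)

# Sequence and candidate labels are illustrative only.
sequence = "The battery lasts all day and charges quickly."
print(classifier(sequence, candidate_labels=["electronics", "clothing", "groceries"]))
```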
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4093 | 1.0 | 12599 | 0.7861 |
| 0.5241 | 2.0 | 25198 | 0.9800 |
| 0.4969 | 3.0 | 37797 | 1.0316 |
| 0.4239 | 4.0 | 50396 | 1.0382 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/final-fantasy-logo
|
sd-concepts-library
| 2022-10-05T03:38:42Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-05T03:38:38Z |
---
license: mit
---
### Final Fantasy logo on Stable Diffusion
This is the `<final-fantasy-logo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
anas-awadalla/gpt2-span-head-few-shot-k-64-finetuned-squad-seed-2
|
anas-awadalla
| 2022-10-05T03:11:15Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:33:16Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-64-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-64-finetuned-squad-seed-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-span-head-few-shot-k-64-finetuned-squad-seed-0
|
anas-awadalla
| 2022-10-05T02:55:37Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:28:29Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-64-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-64-finetuned-squad-seed-0
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-span-head-few-shot-k-32-finetuned-squad-seed-4
|
anas-awadalla
| 2022-10-05T02:28:18Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:23:08Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-span-head-few-shot-k-32-finetuned-squad-seed-2
|
anas-awadalla
| 2022-10-05T02:13:17Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:18:00Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-32-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-32-finetuned-squad-seed-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-1024-finetuned-squad-seq2seq-seed-2
|
anas-awadalla
| 2022-10-05T01:32:06Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-05T01:08:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-1024-finetuned-squad-seq2seq-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-1024-finetuned-squad-seq2seq-seed-2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
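The input format used during fine-tuning is not documented in this card; the sketch below assumes a simple `question: ... context: ...` prompt, which may not match the actual training preprocessing:
```python
from transformers import pipeline

qa = pipeline(
    "text2text-generation",
    model="anas-awadalla/bart-base-few-shot-k-1024-finetuned-squad-seq2seq-seed-2",
)

# Prompt format is an assumption; adjust it to match the training preprocessing.
prompt = (
    "question: Where is the Eiffel Tower located? "
    "context: The Eiffel Tower is a wrought-iron lattice tower in Paris, France."
)
print(qa(prompt, max_length=32))
```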
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-span-head-few-shot-k-16-finetuned-squad-seed-4
|
anas-awadalla
| 2022-10-05T01:04:35Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:04:59Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-16-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/t5-base-few-shot-k-16-finetuned-squad-seed-0
|
anas-awadalla
| 2022-10-05T00:56:03Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-27T16:28:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-base-few-shot-k-16-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-span-head-few-shot-k-16-finetuned-squad-seed-2
|
anas-awadalla
| 2022-10-05T00:49:43Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-26T23:00:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-span-head-few-shot-k-16-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-span-head-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seq2seq-seed-2
|
anas-awadalla
| 2022-10-05T00:23:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-05T00:08:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-512-finetuned-squad-seq2seq-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-512-finetuned-squad-seq2seq-seed-2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/t5-base-few-shot-k-64-finetuned-squad-seed-2
|
anas-awadalla
| 2022-10-05T00:08:41Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-27T17:22:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-base-few-shot-k-64-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-64-finetuned-squad-seed-2
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-256-finetuned-squad-seq2seq-seed-4
|
anas-awadalla
| 2022-10-04T23:48:33Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T23:38:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-256-finetuned-squad-seq2seq-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-256-finetuned-squad-seq2seq-seed-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/gpt2-large-lr-1e5-span-head-finetuned-squad
|
anas-awadalla
| 2022-10-04T23:31:54Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-04T20:19:03Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: gpt2-large-lr-1e5-span-head-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large-lr-1e5-span-head-finetuned-squad
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/t5-base-few-shot-k-32-finetuned-squad-seed-4
|
anas-awadalla
| 2022-10-04T23:23:17Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-27T17:07:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-base-few-shot-k-32-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-64-finetuned-squad-seq2seq-seed-4
|
anas-awadalla
| 2022-10-04T22:42:11Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T22:34:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-64-finetuned-squad-seq2seq-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-64-finetuned-squad-seq2seq-seed-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-64-finetuned-squad-seq2seq-seed-0
|
anas-awadalla
| 2022-10-04T22:22:52Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T22:15:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-64-finetuned-squad-seq2seq-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-64-finetuned-squad-seq2seq-seed-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/t5-base-few-shot-k-16-finetuned-squad-seed-4
|
anas-awadalla
| 2022-10-04T22:19:15Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-27T16:44:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: t5-base-few-shot-k-16-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-4
|
anas-awadalla
| 2022-10-04T22:13:34Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T22:06:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-2
|
anas-awadalla
| 2022-10-04T22:04:20Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T21:57:03Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
svenvonnatzmer/finetuning-sentiment-model-3000-samples
|
svenvonnatzmer
| 2022-10-04T21:55:25Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-04T21:45:00Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6942
- eval_accuracy: 0.5
- eval_f1: 0.0
- eval_runtime: 272.0623
- eval_samples_per_second: 1.103
- eval_steps_per_second: 0.07
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
anas-awadalla/bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-0
|
anas-awadalla
| 2022-10-04T21:55:10Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T21:47:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-32-finetuned-squad-seq2seq-seed-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
ImageIN/convnext-base-224_finetuned_on_unlabelled_IA_with_snorkel_labels
|
ImageIN
| 2022-10-04T21:54:59Z | 197 | 0 |
transformers
|
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-04T13:58:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: convnext-base-224_finetuned_on_unlabelled_IA_with_snorkel_labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-base-224_finetuned_on_unlabelled_IA_with_snorkel_labels
This model is a fine-tuned version of [facebook/convnext-base-224](https://huggingface.co/facebook/convnext-base-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3443
- Precision: 0.9864
- Recall: 0.9822
- F1: 0.9843
- Accuracy: 0.9884
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3611 | 1.0 | 2021 | 0.3467 | 0.9843 | 0.9729 | 0.9784 | 0.9842 |
| 0.3524 | 2.0 | 4042 | 0.3453 | 0.9853 | 0.9790 | 0.9821 | 0.9868 |
| 0.3466 | 3.0 | 6063 | 0.3438 | 0.9854 | 0.9847 | 0.9851 | 0.9889 |
| 0.3433 | 4.0 | 8084 | 0.3434 | 0.9850 | 0.9808 | 0.9829 | 0.9873 |
| 0.3404 | 5.0 | 10105 | 0.3459 | 0.9853 | 0.9790 | 0.9821 | 0.9868 |
| 0.3384 | 6.0 | 12126 | 0.3453 | 0.9853 | 0.9790 | 0.9821 | 0.9868 |
| 0.3382 | 7.0 | 14147 | 0.3437 | 0.9864 | 0.9822 | 0.9843 | 0.9884 |
| 0.3358 | 8.0 | 16168 | 0.3441 | 0.9857 | 0.9829 | 0.9843 | 0.9884 |
| 0.3349 | 9.0 | 18189 | 0.3448 | 0.9857 | 0.9829 | 0.9843 | 0.9884 |
| 0.3325 | 10.0 | 20210 | 0.3443 | 0.9864 | 0.9822 | 0.9843 | 0.9884 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
anas-awadalla/bart-base-few-shot-k-16-finetuned-squad-seq2seq-seed-4
|
anas-awadalla
| 2022-10-04T21:45:46Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T21:38:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-16-finetuned-squad-seq2seq-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-16-finetuned-squad-seq2seq-seed-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-16-finetuned-squad-seq2seq-seed-0
|
anas-awadalla
| 2022-10-04T21:27:02Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-04T20:39:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-16-finetuned-squad-seq2seq-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-16-finetuned-squad-seq2seq-seed-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
tner/deberta-v3-large-btc
|
tner
| 2022-10-04T20:45:22Z | 108 | 2 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"dataset:tner/btc",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-04T20:43:45Z |
---
datasets:
- tner/btc
metrics:
- f1
- precision
- recall
model-index:
- name: tner/deberta-v3-large-btc
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: tner/btc
type: tner/btc
args: tner/btc
metrics:
- name: F1
type: f1
value: 0.8399238265934805
- name: Precision
type: precision
value: 0.8237749945067018
- name: Recall
type: recall
value: 0.8567184643510055
- name: F1 (macro)
type: f1_macro
value: 0.7921150390682584
- name: Precision (macro)
type: precision_macro
value: 0.7766126681668878
- name: Recall (macro)
type: recall_macro
value: 0.8103758198218992
- name: F1 (entity span)
type: f1_entity_span
value: 0.9134087599417496
- name: Precision (entity span)
type: precision_entity_span
value: 0.8958470665787739
- name: Recall (entity span)
type: recall_entity_span
value: 0.931672760511883
pipeline_tag: token-classification
widget:
- text: "Jacob Collier is a Grammy awarded artist from England."
example_title: "NER Example 1"
---
# tner/deberta-v3-large-btc
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the
[tner/btc](https://huggingface.co/datasets/tner/btc) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.8399238265934805
- Precision (micro): 0.8237749945067018
- Recall (micro): 0.8567184643510055
- F1 (macro): 0.7921150390682584
- Precision (macro): 0.7766126681668878
- Recall (macro): 0.8103758198218992
The per-entity breakdown of the F1 score on the test set is below:
- location: 0.7503949447077408
- organization: 0.7042372881355932
- person: 0.9217128843614413
For F1 scores, the confidence interval is obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.8283024935970381, 0.8507400882379221]
- 95%: [0.8260021524132041, 0.8526162579659953]
- F1 (macro):
- 90%: [0.8283024935970381, 0.8507400882379221]
- 95%: [0.8260021524132041, 0.8526162579659953]
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/deberta-v3-large-btc/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/deberta-v3-large-btc/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip
```shell
pip install tner
```
and activate model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/deberta-v3-large-btc")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be loaded directly with the transformers library, but this is not recommended, as the CRF layer is not supported there at the moment.
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/btc']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: microsoft/deberta-v3-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 16
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 8
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/deberta-v3-large-btc/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
    abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross-lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine-tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
|
NimaBoscarino/dog_food
|
NimaBoscarino
| 2022-10-04T19:07:06Z | 198 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:lewtun/dog_food",
"model-index",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-03T19:12:40Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- lewtun/dog_food
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
library_name: transformers
co2_eq_emissions:
emissions: 6.799888815236616
eval_info:
col_mapping: test
model-index:
- name: NimaBoscarino/dog_food
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
config: lewtun--dog_food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 1.0
verified: true
- name: Precision Macro
type: precision
value: 1.0
verified: true
- name: Precision Micro
type: precision
value: 1.0
verified: true
- name: Precision Weighted
type: precision
value: 1.0
verified: true
- name: Recall Macro
type: recall
value: 1.0
verified: true
- name: Recall Micro
type: recall
value: 1.0
verified: true
- name: Recall Weighted
type: recall
value: 1.0
verified: true
- name: F1 Macro
type: f1
value: 1.0
verified: true
- name: F1 Micro
type: f1
value: 1.0
verified: true
- name: F1 Weighted
type: f1
value: 1.0
verified: true
- name: loss
type: loss
value: 1.848173087637406e-05
verified: true
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1647758504
- CO2 Emissions (in grams): 6.7999
## Validation Metrics
- Loss: 0.001
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000
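A minimal inference sketch with the transformers pipeline API (the image URL below is taken from the widget examples above and is only illustrative):
```python
from transformers import pipeline

# Load the AutoTrain image classifier from the Hub.
classifier = pipeline("image-classification", model="NimaBoscarino/dog_food")

# Any local image path or URL works; this one comes from the widget examples.
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
print(classifier(url))
```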
|
minjibi/aa
|
minjibi
| 2022-10-04T18:57:45Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-04T18:33:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: aa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aa
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 15.9757
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 14.5628 | 3.33 | 20 | 16.1808 | 1.0 |
| 14.5379 | 6.67 | 40 | 16.1005 | 1.0 |
| 14.3379 | 10.0 | 60 | 15.9757 | 1.0 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu102
- Datasets 1.4.1
- Tokenizers 0.12.1
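For completeness, a minimal inference sketch with the automatic-speech-recognition pipeline; note that the evaluation WER of 1.0 above suggests the checkpoint is unlikely to produce useful transcriptions, and `sample.wav` is only a placeholder path:
```python
from transformers import pipeline

# Load the fine-tuned wav2vec2 checkpoint ("sample.wav" is a placeholder audio file)
asr = pipeline("automatic-speech-recognition", model="minjibi/aa")
print(asr("sample.wav"))  # {"text": "..."}
```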
|
lilykaw/finetuning-sentiment-BERT-model-samples
|
lilykaw
| 2022-10-04T18:23:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-03T18:21:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-BERT-model-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8627450980392156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-BERT-model-samples
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7999
- Accuracy: 0.86
- F1: 0.8627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
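A minimal inference sketch with the text-classification pipeline (labels may come back as generic LABEL_0/LABEL_1 if `id2label` was not customised in the exported config):
```python
from transformers import pipeline

# Load the IMDB fine-tuned BERT sentiment classifier
classifier = pipeline("text-classification", model="lilykaw/finetuning-sentiment-BERT-model-samples")

# Example review; labels may be LABEL_0 / LABEL_1 depending on the exported config
print(classifier("This movie was an absolute delight from start to finish."))
```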
|
fchollet/stable-diffusion
|
fchollet
| 2022-10-04T18:16:38Z | 0 | 15 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-09-18T21:08:48Z |
---
license: creativeml-openrail-m
---
Ported from the weights hosted on the original model repo: https://huggingface.co/CompVis/stable-diffusion-v1-4
|
Ktolodozo/Beau
|
Ktolodozo
| 2022-10-04T17:10:41Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-10-04T17:07:13Z |
---
license: openrail
---
```bash
# Install the dependencies and log in to the Hugging Face Hub
pip install --upgrade diffusers transformers scipy
huggingface-cli login
```
```python
# Run Stable Diffusion v1.4 with the default (fp32) weights
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda"
pipe = StableDiffusionPipeline.from_pretrained(model_id, use_auth_token=True)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("astronaut_rides_horse.png")
```
```python
# Same generation with the fp16 weights to reduce GPU memory usage
# (continues from the previous snippet: model_id, device and autocast are already defined)
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16", use_auth_token=True)
pipe = pipe.to(device)

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("astronaut_rides_horse.png")
```
```python
# Use the K-LMS scheduler here instead of the default scheduler
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

model_id = "CompVis/stable-diffusion-v1-4"
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, use_auth_token=True)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("astronaut_rides_horse.png")
```
|
scite/roberta-base-squad2-nq-bioasq
|
scite
| 2022-10-04T16:10:49Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-04T14:49:13Z |
---
license: apache-2.0
tags:
- question-answering
- generated_from_trainer
model-index:
- name: roberta-base-squad2-nq-bioasq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-nq-bioasq
## Model description
This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on the BioASQ 10b dataset.
## Intended uses & limitations
Cross-domain question answering!
## Training and evaluation data
Training: BioASQ 10B combined with SQuAD examples sampled evenly to match the number of BioASQ 10B samples.
Eval: BioASQ 9B eval combined with SQuAD eval examples sampled evenly to match the number of BioASQ 9B eval samples.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
Exact match improved from 60.9% (F1 71.8%) before fine-tuning to 95.2% (F1 96.6%) on the BioASQ 9B held-out training set.
Scores on the combined SQuAD+BioASQ evaluation held up as well, going from exact match 72.5% (F1 81.4%) to 88.5% (F1 93.3%).
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
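A minimal usage sketch with the question-answering pipeline (the question/context pair is illustrative only):
```python
from transformers import pipeline

# Load the BioASQ fine-tuned extractive QA checkpoint
qa = pipeline("question-answering", model="scite/roberta-base-squad2-nq-bioasq")

# Illustrative biomedical question/context pair
result = qa(
    question="What does the BRCA1 protein act as?",
    context="The BRCA1 gene provides instructions for making a protein that acts as a tumor suppressor.",
)
print(result)  # {"score": ..., "start": ..., "end": ..., "answer": ...}
```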
|
SzegedAI/hubertusz-tiny-wiki-seq128
|
SzegedAI
| 2022-10-04T15:45:31Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"pretraining",
"generated_from_keras_callback",
"hubert",
"hu",
"dataset:wikipedia",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-09-22T17:53:00Z |
---
language: hu
license: apache-2.0
datasets:
- wikipedia
tags:
- generated_from_keras_callback
- hubert
model-index:
- name: hubert-tiny-wiki-seq128
results: []
---
# hubert-tiny-wiki-seq128
The fully trained model, including the second phase of training, is available here: [SzegedAI/hubert-tiny-wiki](https://huggingface.co/SzegedAI/hubert-tiny-wiki)
This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks.
### Pre-Training Parameters:
- Training steps: 500,000
- Sequence length: 128 (the model supports sequences up to 512)
- Batch size: 1024
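Since the model was pre-trained with an MLM objective, a fill-mask sketch along these lines should work, assuming the exported checkpoint exposes an MLM head and uses the standard `[MASK]` token (the Hungarian prompt is illustrative):
```python
from transformers import pipeline

# Assumes the checkpoint's MLM head is loadable by the fill-mask pipeline
unmasker = pipeline("fill-mask", model="SzegedAI/hubertusz-tiny-wiki-seq128")
print(unmasker("Budapest Magyarország [MASK]."))  # top candidates for the masked token
```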
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
# Acknowledgement
[](https://mi.nemzetilabor.hu/)
|
huggingtweets/breedlove22
|
huggingtweets
| 2022-10-04T15:33:15Z | 123 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-04T15:31:13Z |
---
language: en
thumbnail: http://www.huggingtweets.com/breedlove22/1664897591383/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1530319125985169408/SIC_0P3x_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Robert ₿reedlove</div>
<div style="text-align: center; font-size: 14px;">@breedlove22</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Robert ₿reedlove.
| Data | Robert ₿reedlove |
| --- | --- |
| Tweets downloaded | 3240 |
| Retweets | 600 |
| Short tweets | 535 |
| Tweets kept | 2105 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ip9pkdj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @breedlove22's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36ec6xyk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36ec6xyk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/breedlove22')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mindflayer/south-indian-foods
|
mindflayer
| 2022-10-04T15:01:07Z | 215 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-04T15:00:53Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: south-indian-foods
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.6666666865348816
---
# south-indian-foods
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Idli

#### chutney

#### dosa

#### sambar

#### vada

|
GItaf/bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch15
|
GItaf
| 2022-10-04T15:00:20Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-04T09:36:21Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch15
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.1343
- Cls loss: 3.0991
- Lm loss: 4.3588
- Cls Accuracy: 0.6092
- Cls F1: 0.6066
- Cls Precision: 0.6082
- Cls Recall: 0.6092
- Perplexity: 78.17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 5.3372 | 1.0 | 3470 | 4.9249 | 1.5682 | 4.5325 | 0.5712 | 0.5567 | 0.5751 | 0.5712 | 92.99 |
| 4.8287 | 2.0 | 6940 | 4.7830 | 1.3889 | 4.4355 | 0.6231 | 0.6169 | 0.6448 | 0.6231 | 84.39 |
| 4.6295 | 3.0 | 10410 | 4.7585 | 1.4752 | 4.3894 | 0.6248 | 0.6160 | 0.6340 | 0.6248 | 80.59 |
| 4.4704 | 4.0 | 13880 | 4.7707 | 1.6098 | 4.3678 | 0.6121 | 0.6079 | 0.6156 | 0.6121 | 78.87 |
| 4.3364 | 5.0 | 17350 | 4.8008 | 1.8102 | 4.3478 | 0.6086 | 0.6068 | 0.6105 | 0.6086 | 77.31 |
| 4.2245 | 6.0 | 20820 | 4.8353 | 1.9486 | 4.3477 | 0.6121 | 0.6075 | 0.6131 | 0.6121 | 77.30 |
| 4.1289 | 7.0 | 24290 | 4.8883 | 2.1912 | 4.3400 | 0.6110 | 0.6076 | 0.6182 | 0.6110 | 76.71 |
| 4.0485 | 8.0 | 27760 | 4.9394 | 2.4203 | 4.3337 | 0.5914 | 0.5862 | 0.6016 | 0.5914 | 76.23 |
| 3.9826 | 9.0 | 31230 | 5.0026 | 2.6664 | 4.3354 | 0.6006 | 0.5936 | 0.6035 | 0.6006 | 76.35 |
| 3.9277 | 10.0 | 34700 | 4.9902 | 2.5992 | 4.3398 | 0.6035 | 0.6032 | 0.6088 | 0.6035 | 76.69 |
| 3.8794 | 11.0 | 38170 | 5.0698 | 2.9006 | 4.3441 | 0.6156 | 0.6127 | 0.6213 | 0.6156 | 77.02 |
| 3.8428 | 12.0 | 41640 | 5.0956 | 2.9795 | 4.3501 | 0.6127 | 0.6110 | 0.6184 | 0.6127 | 77.49 |
| 3.8129 | 13.0 | 45110 | 5.1223 | 3.0646 | 4.3555 | 0.6138 | 0.6099 | 0.6172 | 0.6138 | 77.91 |
| 3.7891 | 14.0 | 48580 | 5.1242 | 3.0809 | 4.3534 | 0.6058 | 0.6045 | 0.6071 | 0.6058 | 77.74 |
| 3.7744 | 15.0 | 52050 | 5.1343 | 3.0991 | 4.3588 | 0.6092 | 0.6066 | 0.6082 | 0.6092 | 78.17 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
esc-benchmark/conformer-rnnt-switchboard
|
esc-benchmark
| 2022-10-04T14:47:09Z | 2 | 0 |
nemo
|
[
"nemo",
"esc",
"en",
"dataset:switchboard",
"region:us"
] | null | 2022-10-04T14:46:53Z |
---
language:
- en
tags:
- esc
datasets:
- switchboard
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esc-benchmark/esc-datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="switchboard" \
--output_dir="./" \
--run_name="conformer-rnnt-switchboard" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-benchmark/conformer-rnnt-earnings22
|
esc-benchmark
| 2022-10-04T14:42:49Z | 5 | 0 |
nemo
|
[
"nemo",
"esc",
"en",
"dataset:earnings22",
"region:us"
] | null | 2022-10-04T14:42:33Z |
---
language:
- en
tags:
- esc
datasets:
- earnings22
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esc/esc-datsets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="earnings22" \
--output_dir="./" \
--run_name="conformer-rnnt-earnings22" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-benchmark/conformer-rnnt-spgispeech
|
esc-benchmark
| 2022-10-04T14:40:41Z | 4 | 0 |
nemo
|
[
"nemo",
"esc",
"en",
"dataset:spgispeech",
"region:us"
] | null | 2022-10-04T14:40:26Z |
---
language:
- en
tags:
- esc
datasets:
- spgispeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esc-benchmark/esc-datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="spgispeech" \
--output_dir="./" \
--run_name="conformer-rnnt-spgispeech" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
esc-benchmark/conformer-rnnt-gigaspeech
|
esc-benchmark
| 2022-10-04T14:38:31Z | 5 | 0 |
nemo
|
[
"nemo",
"esc",
"en",
"dataset:gigaspeech",
"region:us"
] | null | 2022-10-04T14:38:16Z |
---
language:
- en
tags:
- esc
datasets:
- gigaspeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esc-benchmark/esc-datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--num_train_epochs="0.88" \
--dataset_config_name="gigaspeech" \
--output_dir="./" \
--run_name="conformer-rnnt-gigaspeech" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
YassineB/test_Resnet
|
YassineB
| 2022-10-04T14:34:56Z | 41 | 0 |
transformers
|
[
"transformers",
"resnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-04T14:10:44Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---
|
esc-benchmark/conformer-rnnt-librispeech
|
esc-benchmark
| 2022-10-04T14:29:51Z | 4 | 0 |
nemo
|
[
"nemo",
"esc",
"en",
"dataset:librispeech",
"region:us"
] | null | 2022-10-04T14:29:35Z |
---
language:
- en
tags:
- esc
datasets:
- librispeech
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \
--config_path="conf/conformer_transducer_bpe_xlarge.yaml" \
--model_name_or_path="stt_en_conformer_transducer_xlarge" \
--dataset_name="esc-benchmark/esc-datasets" \
--tokenizer_path="tokenizer" \
--vocab_size="1024" \
--max_steps="100000" \
--dataset_config_name="librispeech" \
--output_dir="./" \
--run_name="conformer-rnnt-librispeech" \
--wandb_project="rnnt" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="4" \
--logging_steps="50" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--save_strategy="steps" \
--save_steps="20000" \
--evaluation_strategy="steps" \
--eval_steps="20000" \
--report_to="wandb" \
--preprocessing_num_workers="4" \
--fused_batch_size="4" \
--length_column_name="input_lengths" \
--fuse_loss_wer \
--group_by_length \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--use_auth_token
```
|
helliun/conversational-qgen
|
helliun
| 2022-10-04T14:22:13Z | 110 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-02T17:58:33Z |
"Question generation model conversational capabilities"
|
esc-benchmark/whisper-aed-ami
|
esc-benchmark
| 2022-10-04T14:17:38Z | 0 | 0 | null |
[
"esc",
"en",
"dataset:ami",
"region:us"
] | null | 2022-10-04T14:17:20Z |
---
language:
- en
tags:
- esc
datasets:
- ami
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="ami" \
--max_steps="2500" \
--output_dir="./" \
--run_name="whisper-ami" \
--dropout_rate="0.1" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
esc-benchmark/whisper-aed-voxpopuli
|
esc-benchmark
| 2022-10-04T14:01:05Z | 0 | 0 | null |
[
"esc",
"en",
"dataset:voxpopuli",
"region:us"
] | null | 2022-10-04T14:00:48Z |
---
language:
- en
tags:
- esc
datasets:
- voxpopuli
---
To reproduce this run, execute:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esc-benchmark/esc-datasets" \
--dataset_config_name="voxpopuli" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-voxpopuli" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="500" \
--save_strategy="steps" \
--save_steps="500" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
melll-uff/bertweetbr
|
melll-uff
| 2022-10-04T14:00:33Z | 379 | 10 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"pt",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-29T21:10:30Z |
---
language: pt
license: apache-2.0
---
# <a name="introduction"></a> BERTweet.BR: A Pre-Trained Language Model for Tweets in Portuguese
Sharing the same architecture as [BERTweet](https://huggingface.co/docs/transformers/model_doc/bertweet), we trained our model from scratch following the [RoBERTa](https://huggingface.co/docs/transformers/model_doc/roberta) pre-training procedure on a corpus of approximately 9GB containing 100M Portuguese tweets.
## Usage
### Normalized Inputs
```python
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('melll-uff/bertweetbr')
tokenizer = AutoTokenizer.from_pretrained('melll-uff/bertweetbr', normalization=False)
# INPUT TWEETS ALREADY NORMALIZED!
inputs = [
"Procuro um amor , que seja bom pra mim ... vou procurar , eu vou até o fim :nota_musical:",
"Que jogo ontem @USER :mãos_juntas:",
"Demojizer para Python é :polegar_para_cima: e está disponível em HTTPURL"]
encoded_inputs = tokenizer(inputs, return_tensors="pt", padding=True)
with torch.no_grad():
last_hidden_states = model(**encoded_inputs)
# CLS token of the last hidden states. Shape: (number of input sentences, hidden size of the model)
last_hidden_states[0][:,0,:]
tensor([[-0.1430, -0.1325, 0.1595, ..., -0.0802, -0.0153, -0.1358],
[-0.0108, 0.1415, 0.0695, ..., 0.1420, 0.1153, -0.0176],
[-0.1854, 0.1866, 0.3163, ..., -0.2117, 0.2123, -0.1907]])
```
### Normalize raw input Tweets
```python
from emoji import demojize
import torch
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('melll-uff/bertweetbr')
tokenizer = AutoTokenizer.from_pretrained('melll-uff/bertweetbr', normalization=True)
inputs = [
"Procuro um amor , que seja bom pra mim ... vou procurar , eu vou até o fim 🎵",
"Que jogo ontem @cristiano 🙏",
"Demojizer para Python é 👍 e está disponível em https://pypi.org/project/emoji/"]
tokenizer.demojizer = lambda x: demojize(x, language='pt')
[tokenizer.normalizeTweet(s) for s in inputs]
# Tokenizer first normalizes tweet sentences
['Procuro um amor , que seja bom pra mim ... vou procurar , eu vou até o fim :nota_musical:',
'Que jogo ontem @USER :mãos_juntas:',
'Demojizer para Python é :polegar_para_cima: e está disponível em HTTPURL']
encoded_inputs = tokenizer(inputs, return_tensors="pt", padding=True)
with torch.no_grad():
last_hidden_states = model(**encoded_inputs)
# CLS token of the last hidden states. Shape: (number of input sentences, hidden size of the model)
last_hidden_states[0][:,0,:]
tensor([[-0.1430, -0.1325, 0.1595, ..., -0.0802, -0.0153, -0.1358],
[-0.0108, 0.1415, 0.0695, ..., 0.1420, 0.1153, -0.0176],
[-0.1854, 0.1866, 0.3163, ..., -0.2117, 0.2123, -0.1907]])
```
### Mask Filling with Pipeline
```python
from transformers import pipeline
model_name = 'melll-uff/bertweetbr'
tokenizer = AutoTokenizer.from_pretrained('melll-uff/bertweetbr', normalization=False)
filler_mask = pipeline("fill-mask", model=model_name, tokenizer=tokenizer)
filler_mask("Rio é a <mask> cidade do Brasil.", top_k=5)
# Output
[{'sequence': 'Rio é a melhor cidade do Brasil.',
'score': 0.9871652126312256,
'token': 120,
'token_str': 'm e l h o r'},
{'sequence': 'Rio é a pior cidade do Brasil.',
'score': 0.005050931591540575,
'token': 316,
'token_str': 'p i o r'},
{'sequence': 'Rio é a maior cidade do Brasil.',
'score': 0.004420778248459101,
'token': 389,
'token_str': 'm a i o r'},
{'sequence': 'Rio é a minha cidade do Brasil.',
'score': 0.0021856199018657207,
'token': 38,
'token_str': 'm i n h a'},
{'sequence': 'Rio é a segunda cidade do Brasil.',
'score': 0.0002110043278662488,
'token': 667,
'token_str': 's e g u n d a'}]
```
|