modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-13 00:37:47) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 555 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-13 00:35:18) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
lewtun/setfit-ethos-multilabel-example | lewtun | 2022-11-02T17:03:41Z | 1,614 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-11-02T17:03:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 228 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 228,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
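For orientation, here is a rough sketch of how parameters like these are passed to the sentence-transformers `fit()` API. The base checkpoint and the training pair below are placeholders, not the actual data used for this model.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder base checkpoint and toy training pair (the real training data is not described in this card).
model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
train_examples = [InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.9)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=23,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```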
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
henilp105/wav2vec2-base-ASR-telugu | henilp105 | 2022-11-02T16:50:41Z | 0 | 0 | null | ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning", "te", "license:apache-2.0", "model-index", "region:us"] | automatic-speech-recognition | 2022-10-29T12:15:01Z |
---
language: te
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning
license: apache-2.0
model-index:
- name: Henil Panchal Facebook XLSR Wav2Vec2 Large 53 Telugu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
metrics:
- name: Test WER
type: wer
value: 41.90
---
# Wav2Vec2-Large-XLSR-53-Telugu
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Telugu using the ASR IIIT-H dataset.
When using this model, make sure that your speech input is sampled at 16 kHz.
**Test Result (WER)**: 41.90%
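A minimal transcription sketch with 🤗 Transformers is shown below; it assumes the repository includes a matching processor/tokenizer, and `example.wav` is a placeholder audio file:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "henilp105/wav2vec2-base-ASR-telugu"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes a processor is shipped with the weights
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a clip and resample it to the expected 16 kHz.
speech, sample_rate = torchaudio.load("example.wav")
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```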
## Training
70% of the O part of ASR IIIT-H Telugu dataset was used for training.
|
safoinme/zenml-mnist | safoinme | 2022-11-02T16:42:56Z | 2 | 0 | tf-keras | ["tf-keras", "vision", "image-classification", "dataset:mnist", "license:apache-2.0", "region:us"] | image-classification | 2022-10-24T13:41:35Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- mnist
---
# ZenML Community Hour Demo
This model is deployed using the ZenML framework; the pipeline goes from local deployment with MLflow to deployment on the Hugging Face Hub!
## Model description
This model was trained on the MNIST dataset using the Keras framework.
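A minimal loading sketch, assuming the repository was pushed in the Keras format supported by `huggingface_hub` (the input shape below is an assumption based on standard MNIST preprocessing):
```python
import numpy as np
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("safoinme/zenml-mnist")

# Assumed MNIST-style input: one 28x28 grayscale image scaled to [0, 1].
dummy_digit = np.random.rand(1, 28, 28, 1).astype("float32")
print(model.predict(dummy_digit).argmax(axis=-1))
```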
## Intended uses & limitations
More information needed
|
adit94/sentenceTest_kbert2 | adit94 | 2022-11-02T15:25:56Z | 2 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-11-02T15:25:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3185 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
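For context, this is roughly how such a triplet objective is set up with sentence-transformers; the base checkpoint and the triplet texts below are placeholders:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")  # placeholder base checkpoint

# Triplets of (anchor, positive, negative); the real training data is not described in this card.
train_examples = [InputExample(texts=["anchor sentence", "similar sentence", "unrelated sentence"])]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=8)

train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```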
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
svo2/roberta-finetuned-facility | svo2 | 2022-11-02T15:13:27Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "roberta", "question-answering", "generated_from_trainer", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | question-answering | 2022-11-02T14:55:58Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-facility
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-facility
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset.
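A minimal extractive question-answering sketch; the question and context below are made-up placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="svo2/roberta-finetuned-facility")
result = qa(
    question="Which facility handles shipping?",
    context="Orders are packed at the Riverside facility before they are shipped to customers.",
)
print(result["answer"], result["score"])
```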
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
north/fine_North_large_8bit | north | 2022-11-02T15:07:25Z | 4 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "en", "no", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-11-02T10:13:36Z |
---
language:
- en
- "no"
tags:
- text2text-generation
license: apache-2.0
---
|
Watwat100/my-awesome-setfit-model | Watwat100 | 2022-11-02T14:54:52Z | 1 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-11-02T14:54:31Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
lewtun/distilhubert-finetuned-music-genres | lewtun | 2022-11-02T14:52:06Z | 9 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "hubert", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | audio-classification | 2022-11-02T12:41:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-music-genres
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-music-genres
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6982
- Accuracy: 0.458
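A hedged usage sketch with the audio-classification pipeline; the audio file name is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="lewtun/distilhubert-finetuned-music-genres")
for prediction in classifier("example_song.wav", top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```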
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 187 | 2.1291 | 0.312 |
| 2.2402 | 2.0 | 374 | 1.9922 | 0.388 |
| 2.2402 | 3.0 | 561 | 1.7594 | 0.444 |
| 1.6793 | 4.0 | 748 | 1.7164 | 0.447 |
| 1.6793 | 5.0 | 935 | 1.6982 | 0.458 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.11.6
|
adit94/sentenceTest_kbert | adit94 | 2022-11-02T14:16:39Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-11-02T14:16:00Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 3185 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 956,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
troesy/distil-added-voca | troesy | 2022-11-02T13:46:18Z | 16 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-11-02T13:35:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distil-added-voca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil-added-voca
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2515
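A minimal token-classification sketch; the label set of this checkpoint is not documented here, so the returned labels are whatever the model's config defines:
```python
from transformers import pipeline

tagger = pipeline("token-classification", model="troesy/distil-added-voca", aggregation_strategy="simple")
print(tagger("This is a placeholder sentence for tagging."))
```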
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.2577 |
| No log | 2.0 | 348 | 0.2488 |
| 0.2546 | 3.0 | 522 | 0.2515 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jbk1/ddpm-butterflies-128 | jbk1 | 2022-11-02T12:39:23Z | 1 | 0 | diffusers | ["diffusers", "tensorboard", "en", "dataset:jbk", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us"] | null | 2022-11-02T11:56:14Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: jbk
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `jbk` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
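Until the card's own snippet is added, a minimal sketch with the 🤗 Diffusers `DDPMPipeline` would look roughly like this:
```python
from diffusers import DDPMPipeline

# Load the pipeline from the Hub and sample a single 128x128 image.
pipeline = DDPMPipeline.from_pretrained("jbk1/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```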
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/jbk1/ddpm-butterflies-128/tensorboard?#scalars)
|
pig4431/amazonPolarity_ALBERT_5E | pig4431 | 2022-11-02T12:01:45Z | 105 | 0 | transformers | ["transformers", "pytorch", "albert", "text-classification", "generated_from_trainer", "dataset:amazon_polarity", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-02T12:01:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: amazonPolarity_ALBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: train
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.9533333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazonPolarity_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2404
- Accuracy: 0.9533
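A hedged polarity-classification sketch; the exact label names depend on this model's config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/amazonPolarity_ALBERT_5E")
print(classifier("This product exceeded my expectations."))
```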
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.43 | 0.05 | 50 | 0.4090 | 0.8467 |
| 0.2597 | 0.11 | 100 | 0.3132 | 0.8933 |
| 0.2517 | 0.16 | 150 | 0.2642 | 0.9 |
| 0.2218 | 0.21 | 200 | 0.1973 | 0.9333 |
| 0.21 | 0.27 | 250 | 0.2880 | 0.88 |
| 0.2076 | 0.32 | 300 | 0.2646 | 0.8933 |
| 0.2219 | 0.37 | 350 | 0.2053 | 0.94 |
| 0.2086 | 0.43 | 400 | 0.2122 | 0.92 |
| 0.1725 | 0.48 | 450 | 0.2145 | 0.92 |
| 0.2074 | 0.53 | 500 | 0.2174 | 0.9267 |
| 0.1966 | 0.59 | 550 | 0.2013 | 0.9467 |
| 0.1777 | 0.64 | 600 | 0.2352 | 0.9133 |
| 0.1695 | 0.69 | 650 | 0.2965 | 0.9133 |
| 0.177 | 0.75 | 700 | 0.2204 | 0.94 |
| 0.187 | 0.8 | 750 | 0.2328 | 0.9133 |
| 0.1721 | 0.85 | 800 | 0.1713 | 0.9267 |
| 0.1747 | 0.91 | 850 | 0.2365 | 0.9 |
| 0.1627 | 0.96 | 900 | 0.2202 | 0.9267 |
| 0.1421 | 1.01 | 950 | 0.2681 | 0.9133 |
| 0.1516 | 1.07 | 1000 | 0.2116 | 0.9333 |
| 0.1196 | 1.12 | 1050 | 0.1885 | 0.94 |
| 0.1444 | 1.17 | 1100 | 0.2121 | 0.9267 |
| 0.1198 | 1.23 | 1150 | 0.2335 | 0.9333 |
| 0.1474 | 1.28 | 1200 | 0.2348 | 0.9067 |
| 0.125 | 1.33 | 1250 | 0.2401 | 0.9267 |
| 0.117 | 1.39 | 1300 | 0.2041 | 0.9467 |
| 0.114 | 1.44 | 1350 | 0.1985 | 0.9467 |
| 0.1293 | 1.49 | 1400 | 0.1891 | 0.9533 |
| 0.1231 | 1.55 | 1450 | 0.2168 | 0.9467 |
| 0.1306 | 1.6 | 1500 | 0.2097 | 0.94 |
| 0.1449 | 1.65 | 1550 | 0.1790 | 0.9333 |
| 0.132 | 1.71 | 1600 | 0.1838 | 0.9333 |
| 0.124 | 1.76 | 1650 | 0.1890 | 0.94 |
| 0.1419 | 1.81 | 1700 | 0.1575 | 0.9533 |
| 0.139 | 1.87 | 1750 | 0.1794 | 0.94 |
| 0.1171 | 1.92 | 1800 | 0.1981 | 0.9533 |
| 0.1343 | 1.97 | 1850 | 0.1539 | 0.96 |
| 0.0924 | 2.03 | 1900 | 0.1875 | 0.9533 |
| 0.0662 | 2.08 | 1950 | 0.2658 | 0.9467 |
| 0.1024 | 2.13 | 2000 | 0.1869 | 0.9467 |
| 0.1051 | 2.19 | 2050 | 0.1967 | 0.94 |
| 0.1047 | 2.24 | 2100 | 0.1625 | 0.9533 |
| 0.0972 | 2.29 | 2150 | 0.1754 | 0.9533 |
| 0.0885 | 2.35 | 2200 | 0.1831 | 0.94 |
| 0.0999 | 2.4 | 2250 | 0.1830 | 0.9533 |
| 0.0628 | 2.45 | 2300 | 0.1663 | 0.96 |
| 0.0957 | 2.51 | 2350 | 0.1708 | 0.9467 |
| 0.0864 | 2.56 | 2400 | 0.1977 | 0.9467 |
| 0.0752 | 2.61 | 2450 | 0.2427 | 0.9467 |
| 0.0913 | 2.67 | 2500 | 0.2325 | 0.94 |
| 0.139 | 2.72 | 2550 | 0.1470 | 0.96 |
| 0.0839 | 2.77 | 2600 | 0.2193 | 0.94 |
| 0.1045 | 2.83 | 2650 | 0.1672 | 0.9533 |
| 0.0775 | 2.88 | 2700 | 0.1782 | 0.96 |
| 0.0909 | 2.93 | 2750 | 0.2241 | 0.94 |
| 0.1182 | 2.99 | 2800 | 0.1942 | 0.9533 |
| 0.0721 | 3.04 | 2850 | 0.1774 | 0.9533 |
| 0.0562 | 3.09 | 2900 | 0.1877 | 0.9467 |
| 0.0613 | 3.14 | 2950 | 0.1576 | 0.96 |
| 0.0433 | 3.2 | 3000 | 0.2294 | 0.9467 |
| 0.0743 | 3.25 | 3050 | 0.2050 | 0.9533 |
| 0.0568 | 3.3 | 3100 | 0.1770 | 0.9667 |
| 0.0785 | 3.36 | 3150 | 0.1732 | 0.96 |
| 0.0434 | 3.41 | 3200 | 0.2130 | 0.9533 |
| 0.0534 | 3.46 | 3250 | 0.1902 | 0.9667 |
| 0.0748 | 3.52 | 3300 | 0.2082 | 0.9333 |
| 0.0691 | 3.57 | 3350 | 0.1820 | 0.96 |
| 0.0493 | 3.62 | 3400 | 0.1933 | 0.9533 |
| 0.0388 | 3.68 | 3450 | 0.2319 | 0.94 |
| 0.0649 | 3.73 | 3500 | 0.2071 | 0.94 |
| 0.0369 | 3.78 | 3550 | 0.2092 | 0.9533 |
| 0.0381 | 3.84 | 3600 | 0.2171 | 0.9533 |
| 0.0461 | 3.89 | 3650 | 0.2430 | 0.9467 |
| 0.0682 | 3.94 | 3700 | 0.2372 | 0.9467 |
| 0.0438 | 4.0 | 3750 | 0.2335 | 0.9467 |
| 0.0293 | 4.05 | 3800 | 0.2337 | 0.9533 |
| 0.0313 | 4.1 | 3850 | 0.2349 | 0.9467 |
| 0.0467 | 4.16 | 3900 | 0.2806 | 0.94 |
| 0.0243 | 4.21 | 3950 | 0.2493 | 0.94 |
| 0.0409 | 4.26 | 4000 | 0.2460 | 0.9533 |
| 0.041 | 4.32 | 4050 | 0.2550 | 0.9533 |
| 0.0319 | 4.37 | 4100 | 0.2438 | 0.9533 |
| 0.0457 | 4.42 | 4150 | 0.2469 | 0.9533 |
| 0.0343 | 4.48 | 4200 | 0.2298 | 0.9533 |
| 0.0464 | 4.53 | 4250 | 0.2555 | 0.9467 |
| 0.0289 | 4.58 | 4300 | 0.2486 | 0.9533 |
| 0.0416 | 4.64 | 4350 | 0.2539 | 0.9533 |
| 0.0422 | 4.69 | 4400 | 0.2534 | 0.9533 |
| 0.037 | 4.74 | 4450 | 0.2492 | 0.9467 |
| 0.0387 | 4.8 | 4500 | 0.2406 | 0.9533 |
| 0.0472 | 4.85 | 4550 | 0.2411 | 0.9533 |
| 0.0404 | 4.9 | 4600 | 0.2419 | 0.9533 |
| 0.0267 | 4.96 | 4650 | 0.2404 | 0.9533 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
lmvasque/readability-es-benchmark-bertin-es-sentences-2class | lmvasque | 2022-11-02T11:42:14Z | 5 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-01T16:32:36Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): bertin-es-sentences-2class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| **[BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class)** | **sentences** | **2** |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
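As a rough usage sketch for this 2-class sentence-level model (the Spanish example sentence is a placeholder and the returned label names depend on the model's config):
```python
from transformers import pipeline

readability = pipeline(
    "text-classification",
    model="lmvasque/readability-es-benchmark-bertin-es-sentences-2class",
)
print(readability("La fotosíntesis convierte la luz solar en energía química."))
```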
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class | lmvasque | 2022-11-02T11:41:35Z | 9 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-01T12:04:27Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): bertin-es-paragraphs-2class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| **[BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class)** | **paragraphs** | **2** |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class | lmvasque | 2022-11-02T11:41:13Z | 4 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-01T18:11:26Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): mbert-en-es-paragraphs-3class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-------------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| **[mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class)** | **paragraphs** | **3** |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-mbert-es-sentences-2class | lmvasque | 2022-11-02T11:40:54Z | 104 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-01T17:00:06Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): mbert-es-sentences-2class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class)** | **sentences** | **2** |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class | lmvasque | 2022-11-02T11:40:09Z | 103 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-01T18:13:26Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): mbert-en-es-sentences-3class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| **[mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class)** | **sentences** | **3** |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class | lmvasque | 2022-11-02T11:39:45Z | 10 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-01T13:18:41Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): bertin-es-paragraphs-3class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| **[BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class)** | **paragraphs** | **3** |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-bertin-es-sentences-3class
|
lmvasque
| 2022-11-02T11:39:21Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T16:51:26Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): bertin-es-sentences-3class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| **[BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class)** | **sentences** | **3** |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class) | sentences | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
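The checkpoint also works with the plain `transformers` pipeline; a brief sketch (the sentence is illustrative and the reported label names follow the model's `id2label` config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="lmvasque/readability-es-benchmark-bertin-es-sentences-3class",
)
print(clf("El sol sale por el este y se pone por el oeste."))
```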
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
lmvasque/readability-es-benchmark-mbert-es-sentences-3class
|
lmvasque
| 2022-11-02T11:38:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T17:06:41Z |
---
license: cc-by-4.0
---
## Readability benchmark (ES): mbert-es-sentences-3class
This project is part of a series of models from the paper "A Benchmark for Neural Readability Assessment of Texts in Spanish".
You can find more details about the project in our [GitHub](https://github.com/lmvasque/readability-es-benchmark).
## Models
Our models were fine-tuned in multiple settings, including readability assessment in 2-class (simple/complex) and 3-class (basic/intermediate/advanced) for sentences and paragraph datasets.
You can find more details in our [paper](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link).
These are the available models you can use (current model page in bold):
| Model | Granularity | # classes |
|-----------------------------------------------------------------------------------------------------------|----------------|:---------:|
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-2class) | paragraphs | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-2class) | paragraphs | 2 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-paragraphs-3class) | paragraphs | 3 |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-paragraphs-3class) | paragraphs | 3 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-2class) | sentences | 2 |
| [BERTIN (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-bertin-es-sentences-3class) | sentences | 3 |
| [mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-2class) | sentences | 2 |
| **[mBERT (ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-es-sentences-3class)** | **sentences** | **3** |
| [mBERT (EN+ES)](https://huggingface.co/lmvasque/readability-es-benchmark-mbert-en-es-sentences-3class) | sentences | 3 |
For the zero-shot setting, we used the original models [BERTIN](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) and [mBERT](https://huggingface.co/bert-base-multilingual-uncased) with no further training.
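A short usage sketch with the `transformers` pipeline, returning the scores for all three classes (the example sentence is illustrative; label names come from the model's `id2label` config):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="lmvasque/readability-es-benchmark-mbert-es-sentences-3class",
    top_k=None,  # return a score for every readability class
)
print(clf("La mitocondria es el orgánulo encargado de la respiración celular."))
```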
## Results
These are our results for all the readability models in different settings. Please select your model based on the desired performance:
| Granularity | Model | F1 Score (2-class) | Precision (2-class) | Recall (2-class) | F1 Score (3-class) | Precision (3-class) | Recall (3-class) |
|-------------|---------------|:-------------------:|:---------------------:|:------------------:|:--------------------:|:---------------------:|:------------------:|
| Paragraph | Baseline (TF-IDF+LR) | 0.829 | 0.832 | 0.827 | 0.556 | 0.563 | 0.550 |
| Paragraph | BERTIN (Zero) | 0.308 | 0.222 | 0.500 | 0.227 | 0.284 | 0.338 |
| Paragraph | BERTIN (ES) | 0.924 | 0.923 | 0.925 | 0.772 | 0.776 | 0.768 |
| Paragraph | mBERT (Zero) | 0.308 | 0.222 | 0.500 | 0.253 | 0.312 | 0.368 |
| Paragraph | mBERT (EN) | - | - | - | 0.505 | 0.560 | 0.552 |
| Paragraph | mBERT (ES) | **0.933** | **0.932** | **0.936** | 0.776 | 0.777 | 0.778 |
| Paragraph | mBERT (EN+ES) | - | - | - | **0.779** | **0.783** | **0.779** |
| Sentence | Baseline (TF-IDF+LR) | 0.811 | 0.814 | 0.808 | 0.525 | 0.531 | 0.521 |
| Sentence | BERTIN (Zero) | 0.367 | 0.290 | 0.500 | 0.188 | 0.232 | 0.335 |
| Sentence | BERTIN (ES) | **0.900** | **0.900** | **0.900** | **0.699** | **0.701** | **0.698** |
| Sentence | mBERT (Zero) | 0.367 | 0.290 | 0.500 | 0.278 | 0.329 | 0.351 |
| Sentence | mBERT (EN) | - | - | - | 0.521 | 0.565 | 0.539 |
| Sentence | mBERT (ES) | 0.893 | 0.891 | 0.896 | 0.688 | 0.686 | 0.691 |
| Sentence | mBERT (EN+ES) | - | - | - | 0.679 | 0.676 | 0.682 |
## Citation
If you use our results and scripts in your research, please cite our work: "[A Benchmark for Neural Readability Assessment of Texts in Spanish](https://drive.google.com/file/d/1KdwvqrjX8MWYRDGBKeHmiR1NCzDcVizo/view?usp=share_link)" (to be published)
```
@inproceedings{vasquez-rodriguez-etal-2022-benchmarking,
title = "A Benchmark for Neural Readability Assessment of Texts in Spanish",
author = "V{\'a}squez-Rodr{\'\i}guez, Laura and
Cuenca-Jim{\'\e}nez, Pedro-Manuel and
Morales-Esquivel, Sergio Esteban and
Alva-Manchego, Fernando",
booktitle = "Workshop on Text Simplification, Accessibility, and Readability (TSAR-2022), EMNLP 2022",
month = dec,
year = "2022",
}
```
|
Pablo94/racism-finetuned-detests-02-11-2022
|
Pablo94
| 2022-11-02T11:27:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T11:11:28Z |
---
license: cc
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: racism-finetuned-detests-02-11-2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# racism-finetuned-detests-02-11-2022
This model is a fine-tuned version of [davidmasip/racism](https://huggingface.co/davidmasip/racism) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8819
- F1: 0.6199
## Model description
More information needed
## Intended uses & limitations
More information needed
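A minimal inference sketch with the `transformers` pipeline (the example sentence is illustrative; label names and their meanings come from the model's `id2label` config rather than from this card):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="Pablo94/racism-finetuned-detests-02-11-2022")
print(clf("Ejemplo de comentario a clasificar."))
```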
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3032 | 0.64 | 25 | 0.3482 | 0.6434 |
| 0.1132 | 1.28 | 50 | 0.3707 | 0.6218 |
| 0.1253 | 1.92 | 75 | 0.4004 | 0.6286 |
| 0.0064 | 2.56 | 100 | 0.6223 | 0.6254 |
| 0.0007 | 3.21 | 125 | 0.7347 | 0.6032 |
| 0.0006 | 3.85 | 150 | 0.7705 | 0.6312 |
| 0.0004 | 4.49 | 175 | 0.7988 | 0.6304 |
| 0.0003 | 5.13 | 200 | 0.8206 | 0.6255 |
| 0.0003 | 5.77 | 225 | 0.8371 | 0.6097 |
| 0.0003 | 6.41 | 250 | 0.8503 | 0.6148 |
| 0.0003 | 7.05 | 275 | 0.8610 | 0.6148 |
| 0.0002 | 7.69 | 300 | 0.8693 | 0.6199 |
| 0.0002 | 8.33 | 325 | 0.8755 | 0.6199 |
| 0.0002 | 8.97 | 350 | 0.8797 | 0.6199 |
| 0.0002 | 9.62 | 375 | 0.8819 | 0.6199 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
debbiesoon/summarise_v11
|
debbiesoon
| 2022-11-02T11:08:52Z | 101 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-02T10:13:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: summarise_v11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarise_v11
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6322
- Rouge1 Precision: 0.6059
- Rouge1 Recall: 0.6233
- Rouge1 Fmeasure: 0.5895
- Rouge2 Precision: 0.4192
- Rouge2 Recall: 0.4512
- Rouge2 Fmeasure: 0.4176
- Rougel Precision: 0.4622
- Rougel Recall: 0.4946
- Rougel Fmeasure: 0.4566
- Rougelsum Precision: 0.4622
- Rougelsum Recall: 0.4946
- Rougelsum Fmeasure: 0.4566
## Model description
More information needed
## Intended uses & limitations
More information needed
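A hedged usage sketch for this LED checkpoint: LED models generally expect a `global_attention_mask` with global attention on at least the first token, and the input text and generation settings below (beam size, summary length) are illustrative rather than the configuration used during training:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "debbiesoon/summarise_v11"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "Long input text to be summarised goes here."  # LED accepts inputs up to 16384 tokens
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)

# Global attention on the first token, as is customary for LED.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```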
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|
| 1.6201 | 0.45 | 10 | 1.4875 | 0.3203 | 0.64 | 0.3932 | 0.197 | 0.3839 | 0.2385 | 0.1952 | 0.4051 | 0.2454 | 0.1952 | 0.4051 | 0.2454 |
| 0.9172 | 0.91 | 20 | 1.4404 | 0.4917 | 0.5134 | 0.4699 | 0.288 | 0.3095 | 0.276 | 0.3371 | 0.3594 | 0.3277 | 0.3371 | 0.3594 | 0.3277 |
| 1.0923 | 1.36 | 30 | 1.3575 | 0.519 | 0.5505 | 0.4936 | 0.3114 | 0.3237 | 0.2958 | 0.3569 | 0.3702 | 0.3364 | 0.3569 | 0.3702 | 0.3364 |
| 1.1287 | 1.82 | 40 | 1.3269 | 0.4913 | 0.5997 | 0.5068 | 0.3108 | 0.3964 | 0.3269 | 0.3355 | 0.427 | 0.3521 | 0.3355 | 0.427 | 0.3521 |
| 0.9938 | 2.27 | 50 | 1.3189 | 0.5339 | 0.5781 | 0.4973 | 0.3555 | 0.3883 | 0.3345 | 0.3914 | 0.4289 | 0.3678 | 0.3914 | 0.4289 | 0.3678 |
| 0.8659 | 2.73 | 60 | 1.3241 | 0.525 | 0.638 | 0.5165 | 0.3556 | 0.4349 | 0.3535 | 0.3914 | 0.4793 | 0.3886 | 0.3914 | 0.4793 | 0.3886 |
| 0.6187 | 3.18 | 70 | 1.3360 | 0.5875 | 0.5864 | 0.5416 | 0.4005 | 0.4045 | 0.3701 | 0.4485 | 0.4556 | 0.414 | 0.4485 | 0.4556 | 0.414 |
| 0.3941 | 3.64 | 80 | 1.4176 | 0.5373 | 0.6415 | 0.5328 | 0.3576 | 0.446 | 0.3642 | 0.3787 | 0.4586 | 0.3781 | 0.3787 | 0.4586 | 0.3781 |
| 0.4145 | 4.09 | 90 | 1.3936 | 0.4127 | 0.6553 | 0.4568 | 0.2568 | 0.4498 | 0.2988 | 0.2918 | 0.4933 | 0.328 | 0.2918 | 0.4933 | 0.328 |
| 0.4203 | 4.55 | 100 | 1.4703 | 0.6545 | 0.601 | 0.5981 | 0.4789 | 0.4373 | 0.438 | 0.5251 | 0.4851 | 0.4818 | 0.5251 | 0.4851 | 0.4818 |
| 0.687 | 5.0 | 110 | 1.4304 | 0.5566 | 0.6357 | 0.5637 | 0.3734 | 0.4186 | 0.3748 | 0.4251 | 0.4825 | 0.4286 | 0.4251 | 0.4825 | 0.4286 |
| 0.4006 | 5.45 | 120 | 1.5399 | 0.5994 | 0.5794 | 0.5515 | 0.4215 | 0.4218 | 0.398 | 0.4359 | 0.4369 | 0.4084 | 0.4359 | 0.4369 | 0.4084 |
| 0.2536 | 5.91 | 130 | 1.5098 | 0.5074 | 0.6254 | 0.4874 | 0.3369 | 0.4189 | 0.3256 | 0.3802 | 0.4738 | 0.3664 | 0.3802 | 0.4738 | 0.3664 |
| 0.2218 | 6.36 | 140 | 1.5278 | 0.5713 | 0.6059 | 0.5688 | 0.3887 | 0.4233 | 0.3916 | 0.4414 | 0.4795 | 0.4457 | 0.4414 | 0.4795 | 0.4457 |
| 0.2577 | 6.82 | 150 | 1.5469 | 0.5148 | 0.5941 | 0.5175 | 0.3284 | 0.3856 | 0.3335 | 0.3616 | 0.4268 | 0.3681 | 0.3616 | 0.4268 | 0.3681 |
| 0.1548 | 7.27 | 160 | 1.5986 | 0.5983 | 0.657 | 0.5862 | 0.4322 | 0.4877 | 0.4287 | 0.4466 | 0.5167 | 0.4482 | 0.4466 | 0.5167 | 0.4482 |
| 0.1535 | 7.73 | 170 | 1.5796 | 0.5609 | 0.641 | 0.5616 | 0.3856 | 0.4428 | 0.3892 | 0.4238 | 0.4921 | 0.4263 | 0.4238 | 0.4921 | 0.4263 |
| 0.1568 | 8.18 | 180 | 1.6052 | 0.5669 | 0.617 | 0.5679 | 0.3911 | 0.4382 | 0.3969 | 0.4363 | 0.4877 | 0.4417 | 0.4363 | 0.4877 | 0.4417 |
| 0.2038 | 8.64 | 190 | 1.6191 | 0.5466 | 0.5973 | 0.5313 | 0.3543 | 0.4114 | 0.3531 | 0.4061 | 0.4666 | 0.404 | 0.4061 | 0.4666 | 0.404 |
| 0.1808 | 9.09 | 200 | 1.6165 | 0.5751 | 0.5919 | 0.5587 | 0.3831 | 0.4097 | 0.3817 | 0.4482 | 0.4728 | 0.4405 | 0.4482 | 0.4728 | 0.4405 |
| 0.1021 | 9.55 | 210 | 1.6316 | 0.5316 | 0.6315 | 0.535 | 0.3588 | 0.4563 | 0.3697 | 0.405 | 0.502 | 0.4126 | 0.405 | 0.502 | 0.4126 |
| 0.1407 | 10.0 | 220 | 1.6322 | 0.6059 | 0.6233 | 0.5895 | 0.4192 | 0.4512 | 0.4176 | 0.4622 | 0.4946 | 0.4566 | 0.4622 | 0.4946 | 0.4566 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
jayantapaul888/twitter-data-xlm-roberta-base-eng-only-sentiment-finetuned-memes
|
jayantapaul888
| 2022-11-02T10:58:36Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T10:26:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-xlm-roberta-base-eng-only-sentiment-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-xlm-roberta-base-eng-only-sentiment-finetuned-memes
This model is a fine-tuned version of [jayantapaul888/twitter-data-xlm-roberta-base-sentiment-finetuned-memes](https://huggingface.co/jayantapaul888/twitter-data-xlm-roberta-base-sentiment-finetuned-memes) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6286
- Accuracy: 0.8660
- Precision: 0.8796
- Recall: 0.8795
- F1: 0.8795
## Model description
More information needed
## Intended uses & limitations
More information needed
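A minimal inference sketch using the `transformers` pipeline (the example text is illustrative; the sentiment label set is read from the model's `id2label` config):
```python
from transformers import pipeline

sentiment = pipeline(
    "text-classification",
    model="jayantapaul888/twitter-data-xlm-roberta-base-eng-only-sentiment-finetuned-memes",
)
print(sentiment("This meme is actually hilarious"))
```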
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.3421 | 0.8407 | 0.8636 | 0.8543 | 0.8553 |
| 0.396 | 2.0 | 756 | 0.3445 | 0.8496 | 0.8726 | 0.8634 | 0.8631 |
| 0.2498 | 3.0 | 1134 | 0.3656 | 0.8585 | 0.8764 | 0.8727 | 0.8723 |
| 0.1543 | 4.0 | 1512 | 0.4549 | 0.8600 | 0.8742 | 0.8740 | 0.8741 |
| 0.1543 | 5.0 | 1890 | 0.5932 | 0.8645 | 0.8783 | 0.8780 | 0.8780 |
| 0.0815 | 6.0 | 2268 | 0.6286 | 0.8660 | 0.8796 | 0.8795 | 0.8795 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Pablo94/bert-base-uncased-finetuned-detests-02-11-2022
|
Pablo94
| 2022-11-02T10:54:53Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T10:09:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-uncased-finetuned-detests-02-11-2022
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-detests-02-11-2022
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0794
- F1: 0.5455
## Model description
More information needed
## Intended uses & limitations
More information needed
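A minimal inference sketch (the input text and the softmax read-out are illustrative; label meanings are not documented in this card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Pablo94/bert-base-uncased-finetuned-detests-02-11-2022"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Texto de ejemplo a clasificar.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```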
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.014 | 0.64 | 25 | 0.6229 | 0.5536 |
| 0.0698 | 1.28 | 50 | 0.6996 | 0.5907 |
| 0.0173 | 1.92 | 75 | 0.7531 | 0.5882 |
| 0.0032 | 2.56 | 100 | 0.8054 | 0.4928 |
| 0.0087 | 3.21 | 125 | 0.9557 | 0.5735 |
| 0.0028 | 3.85 | 150 | 0.8859 | 0.5352 |
| 0.013 | 4.49 | 175 | 0.9674 | 0.5536 |
| 0.0031 | 5.13 | 200 | 0.9073 | 0.5691 |
| 0.0032 | 5.77 | 225 | 0.9253 | 0.5439 |
| 0.0483 | 6.41 | 250 | 0.9705 | 0.5837 |
| 0.0323 | 7.05 | 275 | 1.0368 | 0.5824 |
| 0.0019 | 7.69 | 300 | 1.0221 | 0.5520 |
| 0.0256 | 8.33 | 325 | 1.0419 | 0.5523 |
| 0.0319 | 8.97 | 350 | 1.0764 | 0.5425 |
| 0.0125 | 9.62 | 375 | 1.0794 | 0.5455 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Voicelab/sbert-base-cased-pl
|
Voicelab
| 2022-11-02T10:44:31Z | 30,091 | 8 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"pl",
"dataset:Wikipedia",
"arxiv:1908.10084",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-04-11T06:57:47Z |
---
license: cc-by-4.0
language:
- pl
datasets:
- Wikipedia
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
widget:
- source_sentence: "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
sentences:
- "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
- "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "
- "Samica o długości ciała 10–11 mm, szczoteczki na tylnych nogach służące do zbierania pyłku oraz włoski na końcu odwłoka jaskrawo pomarańczowoczerwone. "
example_title: "Uczenie maszynowe"
---
<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>
# SHerbert - Polish SentenceBERT
SentenceBERT is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. Training was based on the original paper [Siamese BERT models for the task of semantic textual similarity (STS)](https://arxiv.org/abs/1908.10084) with a slight modification of how the training data was used. The goal of the model is to generate different embeddings based on the semantic and topic similarity of the given text.
> Semantic textual similarity analyzes how similar two pieces of text are.
Read more about how the model was prepared in our [blog post](https://voicelab.ai/blog/).
The base trained model is a Polish HerBERT. HerBERT is a BERT-based Language Model. For more details, please refer to: "HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish".
# Corpus
The model was trained solely on [Wikipedia](https://dumps.wikimedia.org/).
# Tokenizer
As in the original HerBERT implementation, the training dataset was tokenized into subwords using a character-level byte-pair encoding (CharBPETokenizer) with a vocabulary size of 50k tokens. The tokenizer itself was trained with the tokenizers library.
We kindly encourage you to use the Fast version of the tokenizer, namely HerbertTokenizerFast.
# Usage
```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import pairwise

sbert = AutoModel.from_pretrained("Voicelab/sbert-base-cased-pl")
tokenizer = AutoTokenizer.from_pretrained("Voicelab/sbert-base-cased-pl")

s0 = "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
s1 = "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
s2 = "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "

tokens = tokenizer([s0, s1, s2],
                   padding=True,
                   truncation=True,
                   return_tensors='pt')

# Sentence embeddings are taken from the pooler output; no gradients are needed for inference.
with torch.no_grad():
    x = sbert(tokens["input_ids"],
              tokens["attention_mask"]).pooler_output

# similarity between sentences s0 and s1 (sklearn expects 2D arrays, hence the slicing)
print(pairwise.cosine_similarity(x[0:1], x[1:2]))  # Result: 0.7952354

# similarity between sentences s0 and s2
print(pairwise.cosine_similarity(x[0:1], x[2:3]))  # Result: 0.42359722
```
# Results
| Model | Accuracy | Source |
|--------------------------|------------|---------------------------------------------------------|
| SBERT-WikiSec-base (EN) | 80.42% | https://arxiv.org/abs/1908.10084 |
| SBERT-WikiSec-large (EN) | 80.78% | https://arxiv.org/abs/1908.10084 |
| **sbert-base-cased-pl** | **82.31%** | **https://huggingface.co/Voicelab/sbert-base-cased-pl** |
| sbert-large-cased-pl | 84.42% | https://huggingface.co/Voicelab/sbert-large-cased-pl |
# License
CC BY 4.0
# Citation
If you use this model, please cite the following paper:
# Authors
The model was trained by NLP Research Team at Voicelab.ai.
You can contact us [here](https://voicelab.ai/contact/).
|
Voicelab/sbert-large-cased-pl
|
Voicelab
| 2022-11-02T10:44:13Z | 554 | 7 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"pl",
"dataset:Wikipedia",
"arxiv:1908.10084",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-04-13T07:33:36Z |
---
license: cc-by-4.0
language:
- pl
datasets:
- Wikipedia
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
widget:
- source_sentence: "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
sentences:
- "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
- "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "
- "Samica o długości ciała 10–11 mm, szczoteczki na tylnych nogach służące do zbierania pyłku oraz włoski na końcu odwłoka jaskrawo pomarańczowoczerwone. "
example_title: "Uczenie maszynowe"
---
<img src="https://public.3.basecamp.com/p/rs5XqmAuF1iEuW6U7nMHcZeY/upload/download/VL-NLP-short.png" alt="logo voicelab nlp" style="width:300px;"/>
# SHerbert large - Polish SentenceBERT
SentenceBERT is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. Training was based on the original paper [Siamese BERT models for the task of semantic textual similarity (STS)](https://arxiv.org/abs/1908.10084) with a slight modification of how the training data was used. The goal of the model is to generate different embeddings based on the semantic and topic similarity of the given text.
> Semantic textual similarity analyzes how similar two pieces of text are.
Read more about how the model was prepared in our [blog post](https://voicelab.ai/blog/).
The base trained model is a Polish HerBERT. HerBERT is a BERT-based Language Model. For more details, please refer to: "HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish".
# Corpus
The model was trained solely on [Wikipedia](https://dumps.wikimedia.org/).
# Tokenizer
As in the original HerBERT implementation, the training dataset was tokenized into subwords using a character-level byte-pair encoding (CharBPETokenizer) with a vocabulary size of 50k tokens. The tokenizer itself was trained with the tokenizers library.
We kindly encourage you to use the Fast version of the tokenizer, namely HerbertTokenizerFast.
# Usage
```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.metrics import pairwise

sbert = AutoModel.from_pretrained("Voicelab/sbert-large-cased-pl")
tokenizer = AutoTokenizer.from_pretrained("Voicelab/sbert-large-cased-pl")

s0 = "Uczenie maszynowe jest konsekwencją rozwoju idei sztucznej inteligencji i metod jej wdrażania praktycznego."
s1 = "Głębokie uczenie maszynowe jest sktukiem wdrażania praktycznego metod sztucznej inteligencji oraz jej rozwoju."
s2 = "Kasparow zarzucił firmie IBM oszustwo, kiedy odmówiła mu dostępu do historii wcześniejszych gier Deep Blue. "

tokens = tokenizer([s0, s1, s2],
                   padding=True,
                   truncation=True,
                   return_tensors='pt')

# Sentence embeddings are taken from the pooler output; no gradients are needed for inference.
with torch.no_grad():
    x = sbert(tokens["input_ids"],
              tokens["attention_mask"]).pooler_output

# similarity between sentences s0 and s1 (sklearn expects 2D arrays, hence the slicing)
print(pairwise.cosine_similarity(x[0:1], x[1:2]))  # Result: 0.8011128

# similarity between sentences s0 and s2
print(pairwise.cosine_similarity(x[0:1], x[2:3]))  # Result: 0.58822715
```
# Results
| Model | Accuracy | Source |
|--------------------------|------------|----------------------------------------------------------|
| SBERT-WikiSec-base (EN) | 80.42% | https://arxiv.org/abs/1908.10084 |
| SBERT-WikiSec-large (EN) | 80.78% | https://arxiv.org/abs/1908.10084 |
| sbert-base-cased-pl | 82.31% | https://huggingface.co/Voicelab/sbert-base-cased-pl |
| **sbert-large-cased-pl** | **84.42%** | **https://huggingface.co/Voicelab/sbert-large-cased-pl** |
# License
CC BY 4.0
# Citation
If you use this model, please cite the following paper:
# Authors
The model was trained by NLP Research Team at Voicelab.ai.
You can contact us [here](https://voicelab.ai/contact/).
|
shed-e/scipaper-summary
|
shed-e
| 2022-11-02T09:42:23Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:scitldr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-02T07:17:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scitldr
metrics:
- rouge
model-index:
- name: paper-summary
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: scitldr
type: scitldr
config: Abstract
split: train
args: Abstract
metrics:
- name: Rouge1
type: rouge
value: 0.3484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paper-summary
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the scitldr dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8631
- Rouge1: 0.3484
- Rouge2: 0.1596
- Rougel: 0.2971
- Rougelsum: 0.3047
## Model description
More information needed
## Intended uses & limitations
More information needed
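A minimal generation sketch; the `summarize: ` prefix follows the usual T5 convention, though the card does not state whether this checkpoint was trained with that prefix, and the abstract and generation settings are illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "shed-e/scipaper-summary"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

abstract = "We propose a new self-supervised objective and evaluate it on several benchmarks ..."
inputs = tokenizer("summarize: " + abstract, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```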
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 3.0545 | 1.0 | 63 | 2.9939 | 0.3387 | 0.1538 | 0.2887 | 0.2957 |
| 2.7871 | 2.0 | 126 | 2.9360 | 0.3448 | 0.1577 | 0.2947 | 0.3019 |
| 2.7188 | 3.0 | 189 | 2.8977 | 0.3477 | 0.1585 | 0.2967 | 0.3035 |
| 2.6493 | 4.0 | 252 | 2.8837 | 0.3488 | 0.1597 | 0.2973 | 0.3046 |
| 2.6207 | 5.0 | 315 | 2.8690 | 0.3472 | 0.1566 | 0.2958 | 0.3033 |
| 2.5893 | 6.0 | 378 | 2.8668 | 0.3493 | 0.1592 | 0.2972 | 0.305 |
| 2.5494 | 7.0 | 441 | 2.8657 | 0.3486 | 0.1595 | 0.2976 | 0.3053 |
| 2.5554 | 8.0 | 504 | 2.8631 | 0.3484 | 0.1596 | 0.2971 | 0.3047 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/amazonPolarity_ELECTRA_5E
|
pig4431
| 2022-11-02T09:31:24Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:amazon_polarity",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T09:30:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: amazonPolarity_ELECTRA_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: train
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.9333333333333333
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazonPolarity_ELECTRA_5E
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3512
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
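A minimal inference sketch with the `transformers` pipeline (the review text is illustrative; the two polarity labels are read from the model's `id2label` config):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="pig4431/amazonPolarity_ELECTRA_5E")
print(clf("Arrived broken and the seller never replied to my emails."))
```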
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6705 | 0.03 | 50 | 0.5768 | 0.8867 |
| 0.4054 | 0.05 | 100 | 0.2968 | 0.8933 |
| 0.2461 | 0.08 | 150 | 0.2233 | 0.92 |
| 0.1795 | 0.11 | 200 | 0.2265 | 0.9333 |
| 0.2293 | 0.13 | 250 | 0.2329 | 0.9267 |
| 0.1541 | 0.16 | 300 | 0.2240 | 0.94 |
| 0.2006 | 0.19 | 350 | 0.2779 | 0.92 |
| 0.1826 | 0.21 | 400 | 0.2765 | 0.9133 |
| 0.1935 | 0.24 | 450 | 0.2346 | 0.9333 |
| 0.1887 | 0.27 | 500 | 0.2085 | 0.94 |
| 0.1688 | 0.29 | 550 | 0.2193 | 0.94 |
| 0.1884 | 0.32 | 600 | 0.1982 | 0.9467 |
| 0.189 | 0.35 | 650 | 0.1873 | 0.94 |
| 0.1564 | 0.37 | 700 | 0.2226 | 0.94 |
| 0.1733 | 0.4 | 750 | 0.2462 | 0.9333 |
| 0.1436 | 0.43 | 800 | 0.2328 | 0.94 |
| 0.1517 | 0.45 | 850 | 0.2128 | 0.9533 |
| 0.1922 | 0.48 | 900 | 0.1626 | 0.9467 |
| 0.1401 | 0.51 | 950 | 0.2391 | 0.94 |
| 0.1606 | 0.53 | 1000 | 0.2001 | 0.94 |
| 0.1597 | 0.56 | 1050 | 0.1788 | 0.9467 |
| 0.184 | 0.59 | 1100 | 0.1656 | 0.9467 |
| 0.1448 | 0.61 | 1150 | 0.1752 | 0.96 |
| 0.1575 | 0.64 | 1200 | 0.1878 | 0.9533 |
| 0.1836 | 0.67 | 1250 | 0.1416 | 0.9533 |
| 0.1378 | 0.69 | 1300 | 0.1866 | 0.9467 |
| 0.1901 | 0.72 | 1350 | 0.1654 | 0.9533 |
| 0.1697 | 0.75 | 1400 | 0.1720 | 0.9533 |
| 0.1624 | 0.77 | 1450 | 0.1700 | 0.9467 |
| 0.1487 | 0.8 | 1500 | 0.1786 | 0.94 |
| 0.1367 | 0.83 | 1550 | 0.1974 | 0.9267 |
| 0.1535 | 0.85 | 1600 | 0.1823 | 0.9267 |
| 0.1366 | 0.88 | 1650 | 0.1515 | 0.94 |
| 0.1505 | 0.91 | 1700 | 0.1527 | 0.94 |
| 0.1554 | 0.93 | 1750 | 0.1855 | 0.9467 |
| 0.1478 | 0.96 | 1800 | 0.1885 | 0.9333 |
| 0.1603 | 0.99 | 1850 | 0.1990 | 0.9467 |
| 0.1637 | 1.01 | 1900 | 0.1901 | 0.9467 |
| 0.1074 | 1.04 | 1950 | 0.1886 | 0.9533 |
| 0.0874 | 1.07 | 2000 | 0.2399 | 0.94 |
| 0.1245 | 1.09 | 2050 | 0.2107 | 0.9467 |
| 0.1175 | 1.12 | 2100 | 0.2226 | 0.94 |
| 0.1279 | 1.15 | 2150 | 0.2267 | 0.94 |
| 0.0947 | 1.17 | 2200 | 0.2342 | 0.94 |
| 0.0837 | 1.2 | 2250 | 0.2519 | 0.9467 |
| 0.1091 | 1.23 | 2300 | 0.2531 | 0.94 |
| 0.0867 | 1.25 | 2350 | 0.2519 | 0.94 |
| 0.0845 | 1.28 | 2400 | 0.2431 | 0.9467 |
| 0.0836 | 1.31 | 2450 | 0.1936 | 0.9533 |
| 0.1633 | 1.33 | 2500 | 0.1875 | 0.9333 |
| 0.1029 | 1.36 | 2550 | 0.2345 | 0.94 |
| 0.0755 | 1.39 | 2600 | 0.3028 | 0.94 |
| 0.1539 | 1.41 | 2650 | 0.2497 | 0.94 |
| 0.1055 | 1.44 | 2700 | 0.2002 | 0.9467 |
| 0.1234 | 1.47 | 2750 | 0.1763 | 0.9533 |
| 0.1312 | 1.49 | 2800 | 0.1998 | 0.94 |
| 0.1067 | 1.52 | 2850 | 0.1820 | 0.96 |
| 0.1092 | 1.55 | 2900 | 0.1903 | 0.9467 |
| 0.1209 | 1.57 | 2950 | 0.1912 | 0.9467 |
| 0.0627 | 1.6 | 3000 | 0.2208 | 0.9467 |
| 0.1121 | 1.63 | 3050 | 0.2607 | 0.9333 |
| 0.1106 | 1.65 | 3100 | 0.1852 | 0.9533 |
| 0.0724 | 1.68 | 3150 | 0.2122 | 0.9533 |
| 0.1247 | 1.71 | 3200 | 0.2112 | 0.9467 |
| 0.1247 | 1.73 | 3250 | 0.2021 | 0.9533 |
| 0.096 | 1.76 | 3300 | 0.2340 | 0.9467 |
| 0.1056 | 1.79 | 3350 | 0.2165 | 0.94 |
| 0.1055 | 1.81 | 3400 | 0.2563 | 0.94 |
| 0.1199 | 1.84 | 3450 | 0.2251 | 0.9467 |
| 0.0899 | 1.87 | 3500 | 0.1996 | 0.9533 |
| 0.109 | 1.89 | 3550 | 0.1924 | 0.9533 |
| 0.13 | 1.92 | 3600 | 0.1769 | 0.9467 |
| 0.1037 | 1.95 | 3650 | 0.2003 | 0.9533 |
| 0.0934 | 1.97 | 3700 | 0.2325 | 0.94 |
| 0.1254 | 2.0 | 3750 | 0.2037 | 0.9467 |
| 0.0619 | 2.03 | 3800 | 0.2252 | 0.9533 |
| 0.093 | 2.05 | 3850 | 0.2145 | 0.9533 |
| 0.0827 | 2.08 | 3900 | 0.2237 | 0.9533 |
| 0.0679 | 2.11 | 3950 | 0.2643 | 0.9467 |
| 0.076 | 2.13 | 4000 | 0.2287 | 0.9533 |
| 0.0526 | 2.16 | 4050 | 0.3210 | 0.9267 |
| 0.0354 | 2.19 | 4100 | 0.3259 | 0.9333 |
| 0.026 | 2.21 | 4150 | 0.3448 | 0.9333 |
| 0.0466 | 2.24 | 4200 | 0.3751 | 0.9333 |
| 0.043 | 2.27 | 4250 | 0.3122 | 0.9333 |
| 0.0521 | 2.29 | 4300 | 0.3155 | 0.9333 |
| 0.1018 | 2.32 | 4350 | 0.3066 | 0.94 |
| 0.0572 | 2.35 | 4400 | 0.2848 | 0.94 |
| 0.0903 | 2.37 | 4450 | 0.2289 | 0.9467 |
| 0.0718 | 2.4 | 4500 | 0.2661 | 0.9467 |
| 0.0689 | 2.43 | 4550 | 0.2544 | 0.9467 |
| 0.0829 | 2.45 | 4600 | 0.2816 | 0.9333 |
| 0.0909 | 2.48 | 4650 | 0.2244 | 0.94 |
| 0.0888 | 2.51 | 4700 | 0.2620 | 0.94 |
| 0.0998 | 2.53 | 4750 | 0.2773 | 0.94 |
| 0.0604 | 2.56 | 4800 | 0.2344 | 0.94 |
| 0.0619 | 2.59 | 4850 | 0.2551 | 0.9467 |
| 0.056 | 2.61 | 4900 | 0.2787 | 0.94 |
| 0.1037 | 2.64 | 4950 | 0.2388 | 0.9467 |
| 0.0858 | 2.67 | 5000 | 0.2213 | 0.94 |
| 0.0674 | 2.69 | 5050 | 0.2339 | 0.9467 |
| 0.0438 | 2.72 | 5100 | 0.2759 | 0.9467 |
| 0.0615 | 2.75 | 5150 | 0.2739 | 0.9467 |
| 0.064 | 2.77 | 5200 | 0.2488 | 0.9467 |
| 0.0824 | 2.8 | 5250 | 0.2590 | 0.9467 |
| 0.074 | 2.83 | 5300 | 0.2314 | 0.9467 |
| 0.1077 | 2.85 | 5350 | 0.2571 | 0.9467 |
| 0.0482 | 2.88 | 5400 | 0.2678 | 0.9467 |
| 0.0732 | 2.91 | 5450 | 0.2626 | 0.9333 |
| 0.0564 | 2.93 | 5500 | 0.2586 | 0.94 |
| 0.1019 | 2.96 | 5550 | 0.2706 | 0.9333 |
| 0.0675 | 2.99 | 5600 | 0.2568 | 0.9267 |
| 0.056 | 3.01 | 5650 | 0.2881 | 0.9333 |
| 0.0266 | 3.04 | 5700 | 0.2789 | 0.9467 |
| 0.0207 | 3.07 | 5750 | 0.2535 | 0.9467 |
| 0.0246 | 3.09 | 5800 | 0.2597 | 0.9467 |
| 0.0631 | 3.12 | 5850 | 0.2403 | 0.9533 |
| 0.0627 | 3.15 | 5900 | 0.2336 | 0.9533 |
| 0.1061 | 3.17 | 5950 | 0.2773 | 0.94 |
| 0.0257 | 3.2 | 6000 | 0.2587 | 0.9467 |
| 0.0375 | 3.23 | 6050 | 0.2560 | 0.9467 |
| 0.0404 | 3.25 | 6100 | 0.2851 | 0.94 |
| 0.0748 | 3.28 | 6150 | 0.3005 | 0.94 |
| 0.0384 | 3.31 | 6200 | 0.2442 | 0.9533 |
| 0.0426 | 3.33 | 6250 | 0.2618 | 0.9533 |
| 0.0611 | 3.36 | 6300 | 0.2710 | 0.9467 |
| 0.0282 | 3.39 | 6350 | 0.3200 | 0.94 |
| 0.0449 | 3.41 | 6400 | 0.3203 | 0.94 |
| 0.0508 | 3.44 | 6450 | 0.3197 | 0.94 |
| 0.0385 | 3.47 | 6500 | 0.3391 | 0.9333 |
| 0.0458 | 3.49 | 6550 | 0.3450 | 0.9333 |
| 0.0245 | 3.52 | 6600 | 0.3737 | 0.9333 |
| 0.0547 | 3.55 | 6650 | 0.2889 | 0.94 |
| 0.0398 | 3.57 | 6700 | 0.3751 | 0.9333 |
| 0.0497 | 3.6 | 6750 | 0.2748 | 0.9467 |
| 0.0466 | 3.63 | 6800 | 0.3438 | 0.9333 |
| 0.0241 | 3.65 | 6850 | 0.3279 | 0.9267 |
| 0.0631 | 3.68 | 6900 | 0.2921 | 0.94 |
| 0.0256 | 3.71 | 6950 | 0.3595 | 0.9267 |
| 0.0615 | 3.73 | 7000 | 0.3190 | 0.9333 |
| 0.0495 | 3.76 | 7050 | 0.3451 | 0.9267 |
| 0.0519 | 3.79 | 7100 | 0.3303 | 0.9333 |
| 0.0243 | 3.81 | 7150 | 0.3344 | 0.9333 |
| 0.0348 | 3.84 | 7200 | 0.3609 | 0.9333 |
| 0.0542 | 3.87 | 7250 | 0.2797 | 0.9333 |
| 0.0791 | 3.89 | 7300 | 0.2504 | 0.94 |
| 0.0272 | 3.92 | 7350 | 0.3165 | 0.9333 |
| 0.0701 | 3.95 | 7400 | 0.3039 | 0.9333 |
| 0.0866 | 3.97 | 7450 | 0.3233 | 0.9267 |
| 0.0461 | 4.0 | 7500 | 0.3114 | 0.9267 |
| 0.0486 | 4.03 | 7550 | 0.2995 | 0.94 |
| 0.0052 | 4.05 | 7600 | 0.3128 | 0.94 |
| 0.0312 | 4.08 | 7650 | 0.3723 | 0.9333 |
| 0.0277 | 4.11 | 7700 | 0.3158 | 0.94 |
| 0.0407 | 4.13 | 7750 | 0.3187 | 0.94 |
| 0.0224 | 4.16 | 7800 | 0.3258 | 0.9333 |
| 0.0335 | 4.19 | 7850 | 0.3539 | 0.9333 |
| 0.0425 | 4.21 | 7900 | 0.3391 | 0.9333 |
| 0.0394 | 4.24 | 7950 | 0.3470 | 0.9333 |
| 0.015 | 4.27 | 8000 | 0.3680 | 0.9333 |
| 0.0166 | 4.29 | 8050 | 0.3689 | 0.9333 |
| 0.0358 | 4.32 | 8100 | 0.3281 | 0.94 |
| 0.0152 | 4.35 | 8150 | 0.3391 | 0.9333 |
| 0.0235 | 4.37 | 8200 | 0.3506 | 0.94 |
| 0.0357 | 4.4 | 8250 | 0.3549 | 0.94 |
| 0.0153 | 4.43 | 8300 | 0.3564 | 0.94 |
| 0.0366 | 4.45 | 8350 | 0.3836 | 0.9333 |
| 0.0381 | 4.48 | 8400 | 0.3428 | 0.9333 |
| 0.0349 | 4.51 | 8450 | 0.3600 | 0.94 |
| 0.028 | 4.53 | 8500 | 0.3592 | 0.9333 |
| 0.0322 | 4.56 | 8550 | 0.3478 | 0.9333 |
| 0.0237 | 4.59 | 8600 | 0.3636 | 0.94 |
| 0.0398 | 4.61 | 8650 | 0.3433 | 0.9333 |
| 0.062 | 4.64 | 8700 | 0.3158 | 0.94 |
| 0.0148 | 4.67 | 8750 | 0.3435 | 0.9333 |
| 0.0197 | 4.69 | 8800 | 0.3394 | 0.9333 |
| 0.0594 | 4.72 | 8850 | 0.3336 | 0.9333 |
| 0.0426 | 4.75 | 8900 | 0.3351 | 0.9333 |
| 0.003 | 4.77 | 8950 | 0.3479 | 0.9333 |
| 0.0268 | 4.8 | 9000 | 0.3479 | 0.9333 |
| 0.0524 | 4.83 | 9050 | 0.3485 | 0.9333 |
| 0.0259 | 4.85 | 9100 | 0.3501 | 0.9333 |
| 0.0326 | 4.88 | 9150 | 0.3498 | 0.9333 |
| 0.0236 | 4.91 | 9200 | 0.3482 | 0.9333 |
| 0.0209 | 4.93 | 9250 | 0.3504 | 0.9333 |
| 0.0366 | 4.96 | 9300 | 0.3503 | 0.9333 |
| 0.0246 | 4.99 | 9350 | 0.3512 | 0.9333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Galeros/testpyramidsrnd
|
Galeros
| 2022-11-02T08:21:19Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-11-02T08:20:27Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: Galeros/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
HankyStyle/Multi-ling-BERT
|
HankyStyle
| 2022-11-02T07:21:01Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-04-23T13:02:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Multi-ling-BERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multi-ling-BERT
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
## Usage
### In Transformers
```python
import torch
from transformers import (
    AutoTokenizer,
    AutoModel,
    AutoModelForSequenceClassification,
    BertTokenizerFast,
    BartTokenizerFast,
)

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tokenize a single sentence
text = "I feel happy today!"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
# {'input_ids': tensor([[ 101, 1045, 2514, 3407, 2651,  999,  102]]),
#  'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]])}

tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
# ['[CLS]', 'i', 'feel', 'happy', 'today', '!', '[SEP]']
tokenizer.decode(inputs["input_ids"][0])
# [CLS] i feel happy today! [SEP]

# Tokenize a question/context pair (as used for question answering)
question = "This is the question"
context = "This is the context with lots of information. Some useless. The answer is here some more words."
qa_inputs = tokenizer(question, context, return_tensors="pt", padding=True, truncation=True)
# 'input_ids': tensor([[ 101, 2023, 2003, 1996, 3160,  102, 2023, 2003, 1996, 6123,
#                       2007, 7167, 1997, 2592, 1012, 2070, 11809, 1012, 1996, 3437,
#                       2003, 2182, 2070, 2062, 2616, 1012,  102]])
tokenizer.decode(qa_inputs["input_ids"][0])

# BertTokenizerFast
text = "I feel happy today!"
bert_tokenizer = BertTokenizerFast.from_pretrained(model_name)
inputs_for_BertTokenizer = bert_tokenizer(text, return_tensors="pt", padding=False,
                                          truncation=True, max_length=512, stride=256)
# {'input_ids': tensor([[ 101,  100, 11297, 9200, 11262,  106,  102]]),
#  'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]]),
#  'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]])}

# BartTokenizerFast
bart_tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
inputs_for_BartTokenizerFast = bart_tokenizer(text, return_tensors="pt", padding=False,
                                              truncation=True, max_length=512, stride=256)
# {'input_ids': tensor([[  0, 100, 619, 1372, 452, 328,   2]]),
#  'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]])}

# Model: the base encoder returns hidden states
model = AutoModel.from_pretrained(model_name)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
# torch.Size([1, 7, 768])

# Model with a sequence-classification head returns logits
model = AutoModelForSequenceClassification.from_pretrained(model_name)
outputs = model(**inputs)
print(outputs.logits)
# tensor([[-4.3450,  4.6878]], grad_fn=<AddmmBackward0>)

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
print(predictions)
# tensor([[1.1942e-04, 9.9988e-01]], grad_fn=<SoftmaxBackward0>)
```
|
GItaf/gpt2-gpt2-mc-weight0.25-epoch15-new-nosharing
|
GItaf
| 2022-11-02T05:46:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T07:46:59Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight0.25-epoch15-new-nosharing
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight0.25-epoch15-new-nosharing
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6937
- Cls loss: 2.9228
- Lm loss: 3.9625
- Cls Accuracy: 0.6046
- Cls F1: 0.5997
- Cls Precision: 0.6013
- Cls Recall: 0.6046
- Perplexity: 52.59
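The reported perplexity follows directly from the language-modeling loss above, since perplexity is the exponential of the LM loss; a quick check:
```python
import math

lm_loss = 3.9625  # LM loss reported above
print(round(math.exp(lm_loss), 2))  # 52.59, the reported perplexity
```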
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 4.6729 | 1.0 | 3470 | 4.4248 | 1.5425 | 4.0392 | 0.5689 | 0.5448 | 0.5732 | 0.5689 | 56.78 |
| 4.3854 | 2.0 | 6940 | 4.3672 | 1.4634 | 4.0012 | 0.6121 | 0.6023 | 0.6288 | 0.6121 | 54.66 |
| 4.2559 | 3.0 | 10410 | 4.3660 | 1.5252 | 3.9844 | 0.6133 | 0.6086 | 0.6428 | 0.6133 | 53.75 |
| 4.1479 | 4.0 | 13880 | 4.4069 | 1.7167 | 3.9774 | 0.6075 | 0.6023 | 0.6134 | 0.6075 | 53.38 |
| 4.0501 | 5.0 | 17350 | 4.4152 | 1.7953 | 3.9661 | 0.6023 | 0.5971 | 0.6063 | 0.6023 | 52.78 |
| 3.964 | 6.0 | 20820 | 4.4789 | 2.0438 | 3.9676 | 0.6086 | 0.6035 | 0.6198 | 0.6086 | 52.86 |
| 3.8964 | 7.0 | 24290 | 4.5107 | 2.1849 | 3.9641 | 0.6052 | 0.5990 | 0.6096 | 0.6052 | 52.67 |
| 3.8441 | 8.0 | 27760 | 4.5674 | 2.4206 | 3.9618 | 0.6104 | 0.6043 | 0.6137 | 0.6104 | 52.55 |
| 3.8043 | 9.0 | 31230 | 4.5939 | 2.5361 | 3.9594 | 0.5954 | 0.5911 | 0.5980 | 0.5954 | 52.43 |
| 3.7743 | 10.0 | 34700 | 4.6247 | 2.6479 | 3.9623 | 0.5937 | 0.5906 | 0.5932 | 0.5937 | 52.58 |
| 3.7484 | 11.0 | 38170 | 4.6686 | 2.8241 | 3.9622 | 0.5983 | 0.5924 | 0.6000 | 0.5983 | 52.57 |
| 3.7305 | 12.0 | 41640 | 4.6729 | 2.8435 | 3.9616 | 0.6 | 0.5949 | 0.5958 | 0.6 | 52.54 |
| 3.7149 | 13.0 | 45110 | 4.6809 | 2.8759 | 3.9614 | 0.5931 | 0.5875 | 0.5899 | 0.5931 | 52.53 |
| 3.7047 | 14.0 | 48580 | 4.6895 | 2.9094 | 3.9617 | 0.5983 | 0.5928 | 0.5955 | 0.5983 | 52.55 |
| 3.6973 | 15.0 | 52050 | 4.6937 | 2.9228 | 3.9625 | 0.6046 | 0.5997 | 0.6013 | 0.6046 | 52.59 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
GItaf/gpt2-gpt2-mc-weight0.25-epoch15-new
|
GItaf
| 2022-11-02T05:46:01Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T07:43:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: gpt2-gpt2-mc-weight0.25-epoch15-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-gpt2-mc-weight0.25-epoch15-new
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7276
- Cls loss: 3.0579
- Lm loss: 3.9626
- Cls Accuracy: 0.6110
- Cls F1: 0.6054
- Cls Precision: 0.6054
- Cls Recall: 0.6110
- Perplexity: 52.59
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 4.674 | 1.0 | 3470 | 4.4372 | 1.5961 | 4.0380 | 0.5487 | 0.5279 | 0.5643 | 0.5487 | 56.71 |
| 4.3809 | 2.0 | 6940 | 4.3629 | 1.4483 | 4.0006 | 0.6023 | 0.5950 | 0.6174 | 0.6023 | 54.63 |
| 4.2522 | 3.0 | 10410 | 4.3721 | 1.5476 | 3.9849 | 0.6012 | 0.5981 | 0.6186 | 0.6012 | 53.78 |
| 4.1478 | 4.0 | 13880 | 4.3892 | 1.6429 | 3.9782 | 0.6081 | 0.6019 | 0.6128 | 0.6081 | 53.42 |
| 4.0491 | 5.0 | 17350 | 4.4182 | 1.8093 | 3.9656 | 0.6156 | 0.6091 | 0.6163 | 0.6156 | 52.75 |
| 3.9624 | 6.0 | 20820 | 4.4757 | 2.0348 | 3.9666 | 0.6121 | 0.6048 | 0.6189 | 0.6121 | 52.81 |
| 3.8954 | 7.0 | 24290 | 4.4969 | 2.1327 | 3.9634 | 0.6092 | 0.6028 | 0.6087 | 0.6092 | 52.64 |
| 3.846 | 8.0 | 27760 | 4.5632 | 2.4063 | 3.9613 | 0.6017 | 0.5972 | 0.6014 | 0.6017 | 52.52 |
| 3.8036 | 9.0 | 31230 | 4.6068 | 2.5888 | 3.9592 | 0.6052 | 0.5988 | 0.6026 | 0.6052 | 52.41 |
| 3.7724 | 10.0 | 34700 | 4.6175 | 2.6197 | 3.9621 | 0.6052 | 0.6006 | 0.6009 | 0.6052 | 52.57 |
| 3.7484 | 11.0 | 38170 | 4.6745 | 2.8470 | 3.9622 | 0.6046 | 0.5996 | 0.6034 | 0.6046 | 52.57 |
| 3.7291 | 12.0 | 41640 | 4.6854 | 2.8950 | 3.9611 | 0.6110 | 0.6056 | 0.6049 | 0.6110 | 52.52 |
| 3.7148 | 13.0 | 45110 | 4.7103 | 2.9919 | 3.9618 | 0.6063 | 0.6002 | 0.6029 | 0.6063 | 52.55 |
| 3.703 | 14.0 | 48580 | 4.7226 | 3.0417 | 3.9616 | 0.6081 | 0.6027 | 0.6021 | 0.6081 | 52.54 |
| 3.6968 | 15.0 | 52050 | 4.7276 | 3.0579 | 3.9626 | 0.6110 | 0.6054 | 0.6054 | 0.6110 | 52.59 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Chloecakee/finetuning-sentiment-model-imdb
|
Chloecakee
| 2022-11-02T04:30:05Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-26T16:32:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2095
- Accuracy: 0.943
- F1: 0.9420
## Model description
More information needed
## Intended uses & limitations
More information needed
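A minimal inference sketch with the `transformers` pipeline (the review is illustrative; labels come from the model's config):
```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="Chloecakee/finetuning-sentiment-model-imdb")
print(sentiment("One of the best films I have seen in years."))
```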
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pepperjirakit/Daimond_Price
|
pepperjirakit
| 2022-11-02T03:57:26Z | 0 | 0 | null |
[
"joblib",
"license:cc-by-3.0",
"region:us"
] | null | 2022-10-26T12:50:54Z |
---
title: Daimond_Price
emoji: 💩
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.10.0
app_file: app.py
pinned: false
license: cc-by-3.0
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
huggingtweets/callmecarsonyt-jerma985-vgdunkey
|
huggingtweets
| 2022-11-02T03:39:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-02T03:38:15Z |
---
language: en
thumbnail: http://www.huggingtweets.com/callmecarsonyt-jerma985-vgdunkey/1667360374615/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/803601382943162368/F36Z7ypy_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/676614171849453568/AZd1Bh-s_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1580752812476071936/E0qU4suK_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Jerma & dunkey & Carson</div>
<div style="text-align: center; font-size: 14px;">@callmecarsonyt-jerma985-vgdunkey</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Jerma & dunkey & Carson.
| Data | Jerma | dunkey | Carson |
| --- | --- | --- | --- |
| Tweets downloaded | 2719 | 1309 | 3220 |
| Retweets | 108 | 151 | 6 |
| Short tweets | 286 | 331 | 793 |
| Tweets kept | 2325 | 827 | 2421 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/mqcymy6y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @callmecarsonyt-jerma985-vgdunkey's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3oq9dor5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3oq9dor5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/callmecarsonyt-jerma985-vgdunkey')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
salascorp/categorizacion_comercios_v_0.0.5
|
salascorp
| 2022-11-02T03:05:01Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-02T02:47:13Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: categorizacion_comercios_v_0.0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# categorizacion_comercios_v_0.0.5
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2025
- Accuracy: 0.9466
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
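As a rough guide only, the settings above correspond to a `Trainer` setup along the lines of the sketch below; the `datasetX` data is not documented here, so a two-example placeholder dataset stands in for it.
```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
# Placeholder data: the real "datasetX" dataset and its label set are not documented on this card.
toy = Dataset.from_dict({"text": ["tienda de ropa", "ferreteria central"], "label": [0, 1]})
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
toy = toy.map(tokenize, batched=True)
args = TrainingArguments(
    output_dir="categorizacion_comercios_v_0.0.5",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
)
Trainer(model=model, args=args, train_dataset=toy, eval_dataset=toy).train()
```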
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
egumasa/en_engagement_RoBERTa_combined
|
egumasa
| 2022-11-02T01:56:39Z | 6 | 1 |
spacy
|
[
"spacy",
"token-classification",
"en",
"doi:10.57967/hf/0082",
"model-index",
"region:us"
] |
token-classification
| 2022-11-02T01:53:35Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_engagement_RoBERTa_combined
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.0
- name: NER Recall
type: recall
value: 0.0
- name: NER F Score
type: f_score
value: 0.0
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.0
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.0
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.0
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.0
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9764065336
---
| Feature | Description |
| --- | --- |
| **Name** | `en_engagement_RoBERTa_combined` |
| **Version** | `AtoI_0.1.85` |
| **spaCy** | `>=3.3.0,<3.4.0` |
| **Default Pipeline** | `transformer`, `tagger`, `parser`, `ner`, `trainable_transformer`, `span_finder`, `spancat` |
| **Components** | `transformer`, `tagger`, `parser`, `ner`, `trainable_transformer`, `span_finder`, `spancat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (124 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` |
| **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` |
| **`spancat`** | `MONOGLOSS`, `ATTRIBUTE`, `JUSTIFY`, `COUNTER`, `CITATION`, `ENTERTAIN`, `ENDORSE`, `DENY`, `CONCUR`, `PRONOUNCE`, `TEMPORAL`, `CONTRAST` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TAG_ACC` | 0.00 |
| `DEP_UAS` | 0.00 |
| `DEP_LAS` | 0.00 |
| `DEP_LAS_PER_TYPE` | 0.00 |
| `SENTS_P` | 96.76 |
| `SENTS_R` | 98.53 |
| `SENTS_F` | 97.64 |
| `ENTS_F` | 0.00 |
| `ENTS_P` | 0.00 |
| `ENTS_R` | 0.00 |
| `SPAN_FINDER_SPAN_CANDIDATES_F` | 50.09 |
| `SPAN_FINDER_SPAN_CANDIDATES_P` | 35.70 |
| `SPAN_FINDER_SPAN_CANDIDATES_R` | 83.94 |
| `SPANS_SC_F` | 76.49 |
| `SPANS_SC_P` | 75.89 |
| `SPANS_SC_R` | 77.11 |
| `LEMMA_ACC` | 0.00 |
| `TRAINABLE_TRANSFORMER_LOSS` | 1535.39 |
| `SPAN_FINDER_LOSS` | 20411.83 |
| `SPANCAT_LOSS` | 24075.13 |
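A minimal usage sketch follows; it assumes the pipeline has been installed as a spaCy package from this repository and that the span categorizer writes to the default `sc` span group, as the `SPANS_SC_*` scores above suggest.
```python
import spacy
# Assumes the packaged pipeline from this repository is installed locally.
nlp = spacy.load("en_engagement_RoBERTa_combined")
doc = nlp("It could be argued that the results are inconclusive.")
# Engagement spans from the span categorizer (default "sc" span group).
if "sc" in doc.spans:
    for span in doc.spans["sc"]:
        print(span.text, span.label_)
# Tagger, parser and NER annotations are available as usual.
for ent in doc.ents:
    print(ent.text, ent.label_)
```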
|
crescendonow/pwa_ner
|
crescendonow
| 2022-11-02T01:03:30Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-02T00:57:00Z |
---
license: apache-2.0
---
Fine-tuned from WangchanBERTa for use by the Provincial Waterworks Authority of Thailand.
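A minimal usage sketch, assuming the checkpoint works with the standard token-classification pipeline (the entity label set is not documented here, and the example sentence is a placeholder):
```python
from transformers import pipeline
ner = pipeline("token-classification", model="crescendonow/pwa_ner")
# Placeholder Thai sentence about the Provincial Waterworks Authority.
print(ner("การประปาส่วนภูมิภาคให้บริการน้ำประปาในจังหวัดเชียงใหม่"))
```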
|
scjones/xlm-roberta-base-finetuned-panx-de
|
scjones
| 2022-11-02T00:43:43Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-02T00:19:09Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
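In the absence of further documentation, a minimal NER sketch is shown below; it assumes the usual PAN-X entity types (PER, ORG, LOC) carry over from the dataset, and the example sentence is a placeholder.
```python
from transformers import pipeline
ner = pipeline("token-classification",
               model="scjones/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
# Placeholder German sentence; PAN-X labels are typically PER, ORG and LOC.
print(ner("Angela Merkel besuchte das Volkswagen-Werk in Wolfsburg."))
```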
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/liverightananda
|
huggingtweets
| 2022-11-02T00:41:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-02T00:40:55Z |
---
language: en
thumbnail: http://www.huggingtweets.com/liverightananda/1667349684656/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1486406479938793473/yG_E0wx-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">nanda</div>
<div style="text-align: center; font-size: 14px;">@liverightananda</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from nanda.
| Data | nanda |
| --- | --- |
| Tweets downloaded | 900 |
| Retweets | 30 |
| Short tweets | 121 |
| Tweets kept | 749 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/908ty3ha/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @liverightananda's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3mtd8cgk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3mtd8cgk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/liverightananda')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/trashfil
|
huggingtweets
| 2022-11-02T00:30:34Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-02T00:29:17Z |
---
language: en
thumbnail: http://www.huggingtweets.com/trashfil/1667349030665/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1530748346935148544/J8kNSD8f_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">💌</div>
<div style="text-align: center; font-size: 14px;">@trashfil</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 💌.
| Data | 💌 |
| --- | --- |
| Tweets downloaded | 467 |
| Retweets | 32 |
| Short tweets | 106 |
| Tweets kept | 329 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3arew141/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @trashfil's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/34h0nac5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/34h0nac5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/trashfil')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
sergiocannata/convnext-tiny-224-finetuned-brs2
|
sergiocannata
| 2022-11-02T00:15:25Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"convnext",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-01T23:03:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: convnext-tiny-224-finetuned-brs2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7924528301886793
- name: F1
type: f1
value: 0.7555555555555556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-brs2
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2502
- Accuracy: 0.7925
- F1: 0.7556
- Precision (ppv): 0.8095
- Recall (sensitivity): 0.7083
- Specificity: 0.8621
- Npv: 0.7812
- Auc: 0.7852
## Model description
More information needed
## Intended uses & limitations
More information needed
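A minimal inference sketch, assuming the checkpoint loads through the image-classification pipeline; the class names depend on the undocumented imagefolder labels and the image path is a placeholder.
```python
from transformers import pipeline
clf = pipeline("image-classification", model="sergiocannata/convnext-tiny-224-finetuned-brs2")
# Placeholder path; replace with an image from the target domain.
print(clf("example.jpg"))
```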
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (ppv) | Recall (sensitivity) | Specificity | Npv | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------------:|:--------------------:|:-----------:|:------:|:------:|
| 0.6884 | 1.89 | 100 | 0.6907 | 0.5472 | 0.4286 | 0.5 | 0.375 | 0.6897 | 0.5714 | 0.5323 |
| 0.5868 | 3.77 | 200 | 0.6604 | 0.6415 | 0.4242 | 0.7778 | 0.2917 | 0.9310 | 0.6136 | 0.6114 |
| 0.4759 | 5.66 | 300 | 0.6273 | 0.6604 | 0.5 | 0.75 | 0.375 | 0.8966 | 0.6341 | 0.6358 |
| 0.3599 | 7.55 | 400 | 0.6520 | 0.6604 | 0.5 | 0.75 | 0.375 | 0.8966 | 0.6341 | 0.6358 |
| 0.3248 | 9.43 | 500 | 0.9115 | 0.6415 | 0.4571 | 0.7273 | 0.3333 | 0.8966 | 0.6190 | 0.6149 |
| 0.3117 | 11.32 | 600 | 0.8608 | 0.6604 | 0.5263 | 0.7143 | 0.4167 | 0.8621 | 0.6410 | 0.6394 |
| 0.4208 | 13.21 | 700 | 0.8774 | 0.6792 | 0.5641 | 0.7333 | 0.4583 | 0.8621 | 0.6579 | 0.6602 |
| 0.5267 | 15.09 | 800 | 1.0131 | 0.6792 | 0.5405 | 0.7692 | 0.4167 | 0.8966 | 0.65 | 0.6566 |
| 0.234 | 16.98 | 900 | 1.1498 | 0.6981 | 0.5556 | 0.8333 | 0.4167 | 0.9310 | 0.6585 | 0.6739 |
| 0.7581 | 18.87 | 1000 | 1.0952 | 0.7170 | 0.6154 | 0.8 | 0.5 | 0.8966 | 0.6842 | 0.6983 |
| 0.1689 | 20.75 | 1100 | 1.1653 | 0.6981 | 0.5789 | 0.7857 | 0.4583 | 0.8966 | 0.6667 | 0.6774 |
| 0.0765 | 22.64 | 1200 | 1.1245 | 0.7170 | 0.6667 | 0.7143 | 0.625 | 0.7931 | 0.7188 | 0.7091 |
| 0.6287 | 24.53 | 1300 | 1.2222 | 0.6981 | 0.6 | 0.75 | 0.5 | 0.8621 | 0.6757 | 0.6810 |
| 0.0527 | 26.42 | 1400 | 1.2350 | 0.7358 | 0.6818 | 0.75 | 0.625 | 0.8276 | 0.7273 | 0.7263 |
| 0.3622 | 28.3 | 1500 | 1.1022 | 0.7547 | 0.6667 | 0.8667 | 0.5417 | 0.9310 | 0.7105 | 0.7364 |
| 0.3227 | 30.19 | 1600 | 1.1541 | 0.7170 | 0.6154 | 0.8 | 0.5 | 0.8966 | 0.6842 | 0.6983 |
| 0.3849 | 32.08 | 1700 | 1.2818 | 0.7170 | 0.6154 | 0.8 | 0.5 | 0.8966 | 0.6842 | 0.6983 |
| 0.4528 | 33.96 | 1800 | 1.3213 | 0.6981 | 0.5789 | 0.7857 | 0.4583 | 0.8966 | 0.6667 | 0.6774 |
| 0.1824 | 35.85 | 1900 | 1.3171 | 0.7170 | 0.6512 | 0.7368 | 0.5833 | 0.8276 | 0.7059 | 0.7055 |
| 0.0367 | 37.74 | 2000 | 1.4484 | 0.7170 | 0.6154 | 0.8 | 0.5 | 0.8966 | 0.6842 | 0.6983 |
| 0.07 | 39.62 | 2100 | 1.3521 | 0.7547 | 0.6977 | 0.7895 | 0.625 | 0.8621 | 0.7353 | 0.7435 |
| 0.0696 | 41.51 | 2200 | 1.2636 | 0.7358 | 0.65 | 0.8125 | 0.5417 | 0.8966 | 0.7027 | 0.7191 |
| 0.1554 | 43.4 | 2300 | 1.2225 | 0.7358 | 0.6667 | 0.7778 | 0.5833 | 0.8621 | 0.7143 | 0.7227 |
| 0.2346 | 45.28 | 2400 | 1.2627 | 0.7547 | 0.6829 | 0.8235 | 0.5833 | 0.8966 | 0.7222 | 0.7399 |
| 0.097 | 47.17 | 2500 | 1.4892 | 0.7170 | 0.6667 | 0.7143 | 0.625 | 0.7931 | 0.7188 | 0.7091 |
| 0.2494 | 49.06 | 2600 | 1.5282 | 0.7170 | 0.6512 | 0.7368 | 0.5833 | 0.8276 | 0.7059 | 0.7055 |
| 0.0734 | 50.94 | 2700 | 1.3989 | 0.7170 | 0.6341 | 0.7647 | 0.5417 | 0.8621 | 0.6944 | 0.7019 |
| 0.1077 | 52.83 | 2800 | 1.5155 | 0.6792 | 0.5641 | 0.7333 | 0.4583 | 0.8621 | 0.6579 | 0.6602 |
| 0.2456 | 54.72 | 2900 | 1.4400 | 0.7170 | 0.6512 | 0.7368 | 0.5833 | 0.8276 | 0.7059 | 0.7055 |
| 0.0823 | 56.6 | 3000 | 1.4511 | 0.7358 | 0.65 | 0.8125 | 0.5417 | 0.8966 | 0.7027 | 0.7191 |
| 0.0471 | 58.49 | 3100 | 1.5114 | 0.7547 | 0.6829 | 0.8235 | 0.5833 | 0.8966 | 0.7222 | 0.7399 |
| 0.0144 | 60.38 | 3200 | 1.4412 | 0.7925 | 0.7317 | 0.8824 | 0.625 | 0.9310 | 0.75 | 0.7780 |
| 0.1235 | 62.26 | 3300 | 1.2029 | 0.7547 | 0.6977 | 0.7895 | 0.625 | 0.8621 | 0.7353 | 0.7435 |
| 0.0121 | 64.15 | 3400 | 1.4925 | 0.7358 | 0.6667 | 0.7778 | 0.5833 | 0.8621 | 0.7143 | 0.7227 |
| 0.2126 | 66.04 | 3500 | 1.3614 | 0.7547 | 0.6667 | 0.8667 | 0.5417 | 0.9310 | 0.7105 | 0.7364 |
| 0.0496 | 67.92 | 3600 | 1.2960 | 0.7736 | 0.7143 | 0.8333 | 0.625 | 0.8966 | 0.7429 | 0.7608 |
| 0.1145 | 69.81 | 3700 | 1.3763 | 0.7547 | 0.6829 | 0.8235 | 0.5833 | 0.8966 | 0.7222 | 0.7399 |
| 0.1272 | 71.7 | 3800 | 1.6328 | 0.7170 | 0.5946 | 0.8462 | 0.4583 | 0.9310 | 0.675 | 0.6947 |
| 0.0007 | 73.58 | 3900 | 1.5622 | 0.7547 | 0.6977 | 0.7895 | 0.625 | 0.8621 | 0.7353 | 0.7435 |
| 0.0101 | 75.47 | 4000 | 1.1811 | 0.7925 | 0.7442 | 0.8421 | 0.6667 | 0.8966 | 0.7647 | 0.7816 |
| 0.0002 | 77.36 | 4100 | 1.8533 | 0.6981 | 0.5789 | 0.7857 | 0.4583 | 0.8966 | 0.6667 | 0.6774 |
| 0.0423 | 79.25 | 4200 | 1.2510 | 0.7547 | 0.6977 | 0.7895 | 0.625 | 0.8621 | 0.7353 | 0.7435 |
| 0.0036 | 81.13 | 4300 | 1.3443 | 0.7547 | 0.6829 | 0.8235 | 0.5833 | 0.8966 | 0.7222 | 0.7399 |
| 0.0432 | 83.02 | 4400 | 1.2864 | 0.7736 | 0.7273 | 0.8 | 0.6667 | 0.8621 | 0.7576 | 0.7644 |
| 0.0021 | 84.91 | 4500 | 0.8999 | 0.7925 | 0.7755 | 0.76 | 0.7917 | 0.7931 | 0.8214 | 0.7924 |
| 0.0002 | 86.79 | 4600 | 1.3634 | 0.7925 | 0.7442 | 0.8421 | 0.6667 | 0.8966 | 0.7647 | 0.7816 |
| 0.0044 | 88.68 | 4700 | 1.7830 | 0.7358 | 0.65 | 0.8125 | 0.5417 | 0.8966 | 0.7027 | 0.7191 |
| 0.0003 | 90.57 | 4800 | 1.2640 | 0.7736 | 0.7273 | 0.8 | 0.6667 | 0.8621 | 0.7576 | 0.7644 |
| 0.0253 | 92.45 | 4900 | 1.2649 | 0.7925 | 0.7442 | 0.8421 | 0.6667 | 0.8966 | 0.7647 | 0.7816 |
| 0.0278 | 94.34 | 5000 | 1.7485 | 0.7170 | 0.6512 | 0.7368 | 0.5833 | 0.8276 | 0.7059 | 0.7055 |
| 0.1608 | 96.23 | 5100 | 1.2641 | 0.8113 | 0.7727 | 0.85 | 0.7083 | 0.8966 | 0.7879 | 0.8024 |
| 0.0017 | 98.11 | 5200 | 1.6380 | 0.7170 | 0.6667 | 0.7143 | 0.625 | 0.7931 | 0.7188 | 0.7091 |
| 0.001 | 100.0 | 5300 | 1.2502 | 0.7925 | 0.7556 | 0.8095 | 0.7083 | 0.8621 | 0.7812 | 0.7852 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/amazonPolarity_XLNET_5E
|
pig4431
| 2022-11-01T23:26:13Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlnet",
"text-classification",
"generated_from_trainer",
"dataset:amazon_polarity",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T23:17:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: amazonPolarity_XLNET_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: train
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.9266666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazonPolarity_XLNET_5E
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4490
- Accuracy: 0.9267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
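The evaluation data are not described, but a hedged sketch of scoring a few `amazon_polarity` test examples (the dataset named in the card metadata) with this checkpoint could look like this:
```python
from datasets import load_dataset
from transformers import pipeline
clf = pipeline("text-classification", model="pig4431/amazonPolarity_XLNET_5E")
# Small sample for illustration; the card does not describe its own evaluation split.
sample = load_dataset("amazon_polarity", split="test").shuffle(seed=42).select(range(8))
for review, label in zip(sample["content"], sample["label"]):
    print(clf(review[:1000])[0], "| gold label:", label)  # [:1000] is a crude length guard
```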
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6238 | 0.01 | 50 | 0.3703 | 0.86 |
| 0.3149 | 0.03 | 100 | 0.3715 | 0.9 |
| 0.3849 | 0.04 | 150 | 0.4125 | 0.8867 |
| 0.4051 | 0.05 | 200 | 0.4958 | 0.8933 |
| 0.3345 | 0.07 | 250 | 0.4258 | 0.9067 |
| 0.439 | 0.08 | 300 | 0.2650 | 0.9067 |
| 0.2248 | 0.09 | 350 | 0.3314 | 0.9267 |
| 0.2849 | 0.11 | 400 | 0.3097 | 0.8933 |
| 0.3468 | 0.12 | 450 | 0.3060 | 0.9067 |
| 0.3216 | 0.13 | 500 | 0.3826 | 0.9067 |
| 0.3462 | 0.15 | 550 | 0.2207 | 0.94 |
| 0.3632 | 0.16 | 600 | 0.1864 | 0.94 |
| 0.2483 | 0.17 | 650 | 0.3069 | 0.9267 |
| 0.3709 | 0.19 | 700 | 0.2859 | 0.9333 |
| 0.2953 | 0.2 | 750 | 0.3010 | 0.9333 |
| 0.3222 | 0.21 | 800 | 0.2668 | 0.9133 |
| 0.3142 | 0.23 | 850 | 0.3545 | 0.8667 |
| 0.2637 | 0.24 | 900 | 0.1922 | 0.9467 |
| 0.3929 | 0.25 | 950 | 0.2712 | 0.92 |
| 0.2918 | 0.27 | 1000 | 0.2516 | 0.9333 |
| 0.2269 | 0.28 | 1050 | 0.4227 | 0.8933 |
| 0.239 | 0.29 | 1100 | 0.3639 | 0.9133 |
| 0.2439 | 0.31 | 1150 | 0.3430 | 0.9133 |
| 0.2417 | 0.32 | 1200 | 0.2920 | 0.94 |
| 0.3223 | 0.33 | 1250 | 0.3426 | 0.9067 |
| 0.2775 | 0.35 | 1300 | 0.3752 | 0.8867 |
| 0.2733 | 0.36 | 1350 | 0.3015 | 0.9333 |
| 0.3737 | 0.37 | 1400 | 0.2875 | 0.9267 |
| 0.2907 | 0.39 | 1450 | 0.4926 | 0.8933 |
| 0.316 | 0.4 | 1500 | 0.2948 | 0.9333 |
| 0.2472 | 0.41 | 1550 | 0.4003 | 0.8933 |
| 0.2607 | 0.43 | 1600 | 0.3608 | 0.92 |
| 0.2848 | 0.44 | 1650 | 0.3332 | 0.9133 |
| 0.2708 | 0.45 | 1700 | 0.3424 | 0.92 |
| 0.3721 | 0.47 | 1750 | 0.2384 | 0.9267 |
| 0.2925 | 0.48 | 1800 | 0.4472 | 0.88 |
| 0.3619 | 0.49 | 1850 | 0.3824 | 0.9 |
| 0.1994 | 0.51 | 1900 | 0.4160 | 0.9133 |
| 0.3586 | 0.52 | 1950 | 0.3198 | 0.8867 |
| 0.2455 | 0.53 | 2000 | 0.3119 | 0.92 |
| 0.2683 | 0.55 | 2050 | 0.4262 | 0.8867 |
| 0.2983 | 0.56 | 2100 | 0.3552 | 0.9067 |
| 0.2973 | 0.57 | 2150 | 0.2966 | 0.8933 |
| 0.2299 | 0.59 | 2200 | 0.2972 | 0.92 |
| 0.295 | 0.6 | 2250 | 0.3122 | 0.9067 |
| 0.2716 | 0.61 | 2300 | 0.2556 | 0.9267 |
| 0.2842 | 0.63 | 2350 | 0.3317 | 0.92 |
| 0.2723 | 0.64 | 2400 | 0.4409 | 0.8933 |
| 0.2492 | 0.65 | 2450 | 0.3871 | 0.88 |
| 0.2297 | 0.67 | 2500 | 0.3526 | 0.9133 |
| 0.2125 | 0.68 | 2550 | 0.4597 | 0.9067 |
| 0.3003 | 0.69 | 2600 | 0.3374 | 0.8933 |
| 0.2622 | 0.71 | 2650 | 0.3492 | 0.9267 |
| 0.2436 | 0.72 | 2700 | 0.3438 | 0.9267 |
| 0.2599 | 0.73 | 2750 | 0.3725 | 0.9133 |
| 0.2759 | 0.75 | 2800 | 0.3260 | 0.9333 |
| 0.1841 | 0.76 | 2850 | 0.4218 | 0.9067 |
| 0.252 | 0.77 | 2900 | 0.2730 | 0.92 |
| 0.248 | 0.79 | 2950 | 0.3628 | 0.92 |
| 0.2356 | 0.8 | 3000 | 0.4012 | 0.9067 |
| 0.191 | 0.81 | 3050 | 0.3500 | 0.9267 |
| 0.2351 | 0.83 | 3100 | 0.4038 | 0.9133 |
| 0.2758 | 0.84 | 3150 | 0.3361 | 0.9067 |
| 0.2952 | 0.85 | 3200 | 0.2301 | 0.9267 |
| 0.2137 | 0.87 | 3250 | 0.3837 | 0.9133 |
| 0.2386 | 0.88 | 3300 | 0.2739 | 0.94 |
| 0.2786 | 0.89 | 3350 | 0.2820 | 0.9333 |
| 0.2284 | 0.91 | 3400 | 0.2557 | 0.9333 |
| 0.2546 | 0.92 | 3450 | 0.2744 | 0.9267 |
| 0.2514 | 0.93 | 3500 | 0.2908 | 0.94 |
| 0.3052 | 0.95 | 3550 | 0.2362 | 0.9333 |
| 0.2366 | 0.96 | 3600 | 0.3047 | 0.9333 |
| 0.2147 | 0.97 | 3650 | 0.3375 | 0.9333 |
| 0.3347 | 0.99 | 3700 | 0.2669 | 0.9267 |
| 0.3076 | 1.0 | 3750 | 0.2453 | 0.94 |
| 0.1685 | 1.01 | 3800 | 0.4117 | 0.9133 |
| 0.1954 | 1.03 | 3850 | 0.3074 | 0.9333 |
| 0.2512 | 1.04 | 3900 | 0.3942 | 0.9133 |
| 0.1365 | 1.05 | 3950 | 0.3211 | 0.92 |
| 0.1985 | 1.07 | 4000 | 0.4188 | 0.9133 |
| 0.1585 | 1.08 | 4050 | 0.4177 | 0.9133 |
| 0.1798 | 1.09 | 4100 | 0.3298 | 0.9333 |
| 0.1458 | 1.11 | 4150 | 0.5283 | 0.9 |
| 0.1831 | 1.12 | 4200 | 0.3884 | 0.92 |
| 0.1452 | 1.13 | 4250 | 0.4130 | 0.9133 |
| 0.1679 | 1.15 | 4300 | 0.3678 | 0.9267 |
| 0.1688 | 1.16 | 4350 | 0.3268 | 0.9333 |
| 0.1175 | 1.17 | 4400 | 0.4722 | 0.92 |
| 0.1661 | 1.19 | 4450 | 0.3899 | 0.9133 |
| 0.1688 | 1.2 | 4500 | 0.4050 | 0.9133 |
| 0.228 | 1.21 | 4550 | 0.4608 | 0.9 |
| 0.1946 | 1.23 | 4600 | 0.5080 | 0.9 |
| 0.1849 | 1.24 | 4650 | 0.4340 | 0.9067 |
| 0.1365 | 1.25 | 4700 | 0.4592 | 0.9133 |
| 0.2432 | 1.27 | 4750 | 0.3683 | 0.92 |
| 0.1679 | 1.28 | 4800 | 0.4604 | 0.9 |
| 0.2107 | 1.29 | 4850 | 0.3952 | 0.9 |
| 0.1499 | 1.31 | 4900 | 0.4275 | 0.92 |
| 0.1504 | 1.32 | 4950 | 0.3370 | 0.9333 |
| 0.1013 | 1.33 | 5000 | 0.3723 | 0.92 |
| 0.1303 | 1.35 | 5050 | 0.2925 | 0.9333 |
| 0.1205 | 1.36 | 5100 | 0.3452 | 0.9267 |
| 0.1427 | 1.37 | 5150 | 0.3080 | 0.94 |
| 0.1518 | 1.39 | 5200 | 0.3190 | 0.94 |
| 0.1885 | 1.4 | 5250 | 0.2726 | 0.9467 |
| 0.1264 | 1.41 | 5300 | 0.3466 | 0.9333 |
| 0.1939 | 1.43 | 5350 | 0.3957 | 0.9133 |
| 0.1939 | 1.44 | 5400 | 0.4007 | 0.9 |
| 0.1239 | 1.45 | 5450 | 0.2924 | 0.9333 |
| 0.1588 | 1.47 | 5500 | 0.2687 | 0.9333 |
| 0.1516 | 1.48 | 5550 | 0.3668 | 0.92 |
| 0.1623 | 1.49 | 5600 | 0.3141 | 0.94 |
| 0.2632 | 1.51 | 5650 | 0.2714 | 0.9333 |
| 0.1674 | 1.52 | 5700 | 0.3188 | 0.94 |
| 0.1854 | 1.53 | 5750 | 0.2818 | 0.9267 |
| 0.1282 | 1.55 | 5800 | 0.2918 | 0.9333 |
| 0.228 | 1.56 | 5850 | 0.2802 | 0.9133 |
| 0.2349 | 1.57 | 5900 | 0.1803 | 0.9467 |
| 0.1608 | 1.59 | 5950 | 0.3112 | 0.92 |
| 0.1493 | 1.6 | 6000 | 0.3018 | 0.9267 |
| 0.2182 | 1.61 | 6050 | 0.3419 | 0.9333 |
| 0.2408 | 1.63 | 6100 | 0.2887 | 0.9267 |
| 0.1872 | 1.64 | 6150 | 0.2408 | 0.9267 |
| 0.1246 | 1.65 | 6200 | 0.3752 | 0.9 |
| 0.2098 | 1.67 | 6250 | 0.2622 | 0.9333 |
| 0.1916 | 1.68 | 6300 | 0.2245 | 0.9467 |
| 0.2069 | 1.69 | 6350 | 0.2151 | 0.9467 |
| 0.1446 | 1.71 | 6400 | 0.2186 | 0.9533 |
| 0.1528 | 1.72 | 6450 | 0.1863 | 0.9533 |
| 0.1352 | 1.73 | 6500 | 0.2660 | 0.9467 |
| 0.2398 | 1.75 | 6550 | 0.1912 | 0.9533 |
| 0.1485 | 1.76 | 6600 | 0.2492 | 0.9467 |
| 0.2006 | 1.77 | 6650 | 0.2495 | 0.9267 |
| 0.2036 | 1.79 | 6700 | 0.3885 | 0.9067 |
| 0.1725 | 1.8 | 6750 | 0.2359 | 0.9533 |
| 0.1864 | 1.81 | 6800 | 0.2271 | 0.9533 |
| 0.1465 | 1.83 | 6850 | 0.2669 | 0.9333 |
| 0.197 | 1.84 | 6900 | 0.2290 | 0.96 |
| 0.1382 | 1.85 | 6950 | 0.2322 | 0.9467 |
| 0.1206 | 1.87 | 7000 | 0.3117 | 0.9333 |
| 0.157 | 1.88 | 7050 | 0.2163 | 0.9533 |
| 0.1686 | 1.89 | 7100 | 0.2239 | 0.9533 |
| 0.1953 | 1.91 | 7150 | 0.3064 | 0.9333 |
| 0.1638 | 1.92 | 7200 | 0.2821 | 0.9533 |
| 0.1605 | 1.93 | 7250 | 0.2413 | 0.9467 |
| 0.1736 | 1.95 | 7300 | 0.2430 | 0.94 |
| 0.2372 | 1.96 | 7350 | 0.2306 | 0.94 |
| 0.1549 | 1.97 | 7400 | 0.2730 | 0.94 |
| 0.1824 | 1.99 | 7450 | 0.3443 | 0.94 |
| 0.2263 | 2.0 | 7500 | 0.2695 | 0.9267 |
| 0.088 | 2.01 | 7550 | 0.2305 | 0.96 |
| 0.0376 | 2.03 | 7600 | 0.3380 | 0.94 |
| 0.072 | 2.04 | 7650 | 0.3349 | 0.9467 |
| 0.0491 | 2.05 | 7700 | 0.3397 | 0.94 |
| 0.0509 | 2.07 | 7750 | 0.3496 | 0.9467 |
| 0.1033 | 2.08 | 7800 | 0.3364 | 0.94 |
| 0.0549 | 2.09 | 7850 | 0.3520 | 0.94 |
| 0.0627 | 2.11 | 7900 | 0.4510 | 0.9267 |
| 0.0283 | 2.12 | 7950 | 0.3733 | 0.94 |
| 0.1215 | 2.13 | 8000 | 0.3892 | 0.9267 |
| 0.0856 | 2.15 | 8050 | 0.3114 | 0.9533 |
| 0.0945 | 2.16 | 8100 | 0.3626 | 0.9333 |
| 0.0901 | 2.17 | 8150 | 0.3116 | 0.94 |
| 0.0688 | 2.19 | 8200 | 0.3515 | 0.9267 |
| 0.1286 | 2.2 | 8250 | 0.3255 | 0.9333 |
| 0.1043 | 2.21 | 8300 | 0.4395 | 0.9133 |
| 0.1199 | 2.23 | 8350 | 0.3307 | 0.94 |
| 0.0608 | 2.24 | 8400 | 0.2992 | 0.9533 |
| 0.0827 | 2.25 | 8450 | 0.3500 | 0.94 |
| 0.047 | 2.27 | 8500 | 0.3982 | 0.94 |
| 0.1154 | 2.28 | 8550 | 0.3851 | 0.94 |
| 0.1158 | 2.29 | 8600 | 0.3820 | 0.9133 |
| 0.1053 | 2.31 | 8650 | 0.4414 | 0.92 |
| 0.1336 | 2.32 | 8700 | 0.3680 | 0.92 |
| 0.0853 | 2.33 | 8750 | 0.3732 | 0.9333 |
| 0.0496 | 2.35 | 8800 | 0.3450 | 0.94 |
| 0.0552 | 2.36 | 8850 | 0.4310 | 0.9267 |
| 0.1054 | 2.37 | 8900 | 0.4174 | 0.92 |
| 0.0951 | 2.39 | 8950 | 0.3815 | 0.9333 |
| 0.1235 | 2.4 | 9000 | 0.4119 | 0.9267 |
| 0.1094 | 2.41 | 9050 | 0.4282 | 0.9133 |
| 0.0897 | 2.43 | 9100 | 0.4766 | 0.9133 |
| 0.0925 | 2.44 | 9150 | 0.3303 | 0.94 |
| 0.1487 | 2.45 | 9200 | 0.2948 | 0.94 |
| 0.0963 | 2.47 | 9250 | 0.2911 | 0.94 |
| 0.0836 | 2.48 | 9300 | 0.3379 | 0.94 |
| 0.1594 | 2.49 | 9350 | 0.3841 | 0.9267 |
| 0.0846 | 2.51 | 9400 | 0.4128 | 0.9267 |
| 0.0984 | 2.52 | 9450 | 0.4131 | 0.9333 |
| 0.1042 | 2.53 | 9500 | 0.4048 | 0.9267 |
| 0.0633 | 2.55 | 9550 | 0.3776 | 0.94 |
| 0.1266 | 2.56 | 9600 | 0.3247 | 0.9333 |
| 0.1084 | 2.57 | 9650 | 0.3174 | 0.9467 |
| 0.0714 | 2.59 | 9700 | 0.3597 | 0.94 |
| 0.0826 | 2.6 | 9750 | 0.3261 | 0.9467 |
| 0.1527 | 2.61 | 9800 | 0.2531 | 0.9533 |
| 0.0506 | 2.63 | 9850 | 0.2994 | 0.9533 |
| 0.1043 | 2.64 | 9900 | 0.3345 | 0.9467 |
| 0.0229 | 2.65 | 9950 | 0.4318 | 0.9333 |
| 0.1247 | 2.67 | 10000 | 0.2951 | 0.9533 |
| 0.1285 | 2.68 | 10050 | 0.3036 | 0.9533 |
| 0.081 | 2.69 | 10100 | 0.3541 | 0.94 |
| 0.0829 | 2.71 | 10150 | 0.3757 | 0.9467 |
| 0.0702 | 2.72 | 10200 | 0.3307 | 0.9533 |
| 0.07 | 2.73 | 10250 | 0.3638 | 0.94 |
| 0.1563 | 2.75 | 10300 | 0.3283 | 0.94 |
| 0.1223 | 2.76 | 10350 | 0.3441 | 0.92 |
| 0.0954 | 2.77 | 10400 | 0.3049 | 0.94 |
| 0.0438 | 2.79 | 10450 | 0.3675 | 0.9467 |
| 0.0796 | 2.8 | 10500 | 0.3364 | 0.94 |
| 0.0803 | 2.81 | 10550 | 0.2970 | 0.94 |
| 0.0324 | 2.83 | 10600 | 0.3941 | 0.9267 |
| 0.083 | 2.84 | 10650 | 0.3439 | 0.94 |
| 0.1263 | 2.85 | 10700 | 0.3759 | 0.9267 |
| 0.1044 | 2.87 | 10750 | 1.0700 | 0.58 |
| 0.1182 | 2.88 | 10800 | 0.4409 | 0.9333 |
| 0.126 | 2.89 | 10850 | 0.6467 | 0.5933 |
| 0.094 | 2.91 | 10900 | 0.3741 | 0.9333 |
| 0.1405 | 2.92 | 10950 | 0.3458 | 0.9267 |
| 0.1024 | 2.93 | 11000 | 0.2946 | 0.9333 |
| 0.0812 | 2.95 | 11050 | 0.2850 | 0.9333 |
| 0.1132 | 2.96 | 11100 | 0.3093 | 0.9267 |
| 0.0775 | 2.97 | 11150 | 0.3938 | 0.9067 |
| 0.1179 | 2.99 | 11200 | 0.3528 | 0.9267 |
| 0.1413 | 3.0 | 11250 | 0.2984 | 0.9333 |
| 0.0528 | 3.01 | 11300 | 0.3387 | 0.9333 |
| 0.0214 | 3.03 | 11350 | 0.4108 | 0.92 |
| 0.0408 | 3.04 | 11400 | 0.4174 | 0.9267 |
| 0.0808 | 3.05 | 11450 | 0.4283 | 0.9267 |
| 0.0535 | 3.07 | 11500 | 0.3719 | 0.9333 |
| 0.0344 | 3.08 | 11550 | 0.4382 | 0.9333 |
| 0.0364 | 3.09 | 11600 | 0.4195 | 0.9333 |
| 0.0524 | 3.11 | 11650 | 0.4607 | 0.92 |
| 0.0682 | 3.12 | 11700 | 0.4503 | 0.92 |
| 0.0554 | 3.13 | 11750 | 0.4563 | 0.92 |
| 0.0401 | 3.15 | 11800 | 0.4668 | 0.9133 |
| 0.0782 | 3.16 | 11850 | 0.4468 | 0.9133 |
| 0.0605 | 3.17 | 11900 | 0.4239 | 0.92 |
| 0.0599 | 3.19 | 11950 | 0.4019 | 0.92 |
| 0.0364 | 3.2 | 12000 | 0.3988 | 0.9267 |
| 0.0357 | 3.21 | 12050 | 0.4168 | 0.9267 |
| 0.072 | 3.23 | 12100 | 0.3889 | 0.9333 |
| 0.0931 | 3.24 | 12150 | 0.3368 | 0.9333 |
| 0.0724 | 3.25 | 12200 | 0.3209 | 0.9333 |
| 0.0653 | 3.27 | 12250 | 0.3615 | 0.9333 |
| 0.0173 | 3.28 | 12300 | 0.3946 | 0.9333 |
| 0.0537 | 3.29 | 12350 | 0.3876 | 0.9333 |
| 0.0373 | 3.31 | 12400 | 0.4079 | 0.9267 |
| 0.0322 | 3.32 | 12450 | 0.3553 | 0.94 |
| 0.0585 | 3.33 | 12500 | 0.4276 | 0.92 |
| 0.0315 | 3.35 | 12550 | 0.4092 | 0.9267 |
| 0.0317 | 3.36 | 12600 | 0.4107 | 0.9267 |
| 0.082 | 3.37 | 12650 | 0.4170 | 0.9267 |
| 0.1101 | 3.39 | 12700 | 0.3801 | 0.9333 |
| 0.0392 | 3.4 | 12750 | 0.3802 | 0.9333 |
| 0.0382 | 3.41 | 12800 | 0.4194 | 0.9267 |
| 0.048 | 3.43 | 12850 | 0.3794 | 0.9333 |
| 0.0896 | 3.44 | 12900 | 0.3961 | 0.9267 |
| 0.0966 | 3.45 | 12950 | 0.3982 | 0.92 |
| 0.0165 | 3.47 | 13000 | 0.3819 | 0.92 |
| 0.0701 | 3.48 | 13050 | 0.3440 | 0.94 |
| 0.0104 | 3.49 | 13100 | 0.4132 | 0.9267 |
| 0.0991 | 3.51 | 13150 | 0.3477 | 0.9333 |
| 0.0554 | 3.52 | 13200 | 0.3255 | 0.94 |
| 0.0476 | 3.53 | 13250 | 0.4343 | 0.92 |
| 0.0213 | 3.55 | 13300 | 0.4601 | 0.92 |
| 0.0465 | 3.56 | 13350 | 0.4141 | 0.9267 |
| 0.1246 | 3.57 | 13400 | 0.3473 | 0.94 |
| 0.1112 | 3.59 | 13450 | 0.3679 | 0.92 |
| 0.0323 | 3.6 | 13500 | 0.3508 | 0.9267 |
| 0.0423 | 3.61 | 13550 | 0.3475 | 0.94 |
| 0.0498 | 3.63 | 13600 | 0.4095 | 0.92 |
| 0.0531 | 3.64 | 13650 | 0.3544 | 0.9333 |
| 0.0365 | 3.65 | 13700 | 0.4403 | 0.9133 |
| 0.058 | 3.67 | 13750 | 0.4284 | 0.9133 |
| 0.0191 | 3.68 | 13800 | 0.4466 | 0.92 |
| 0.0838 | 3.69 | 13850 | 0.5128 | 0.9067 |
| 0.1561 | 3.71 | 13900 | 0.3588 | 0.9267 |
| 0.0464 | 3.72 | 13950 | 0.3867 | 0.92 |
| 0.037 | 3.73 | 14000 | 0.3961 | 0.92 |
| 0.0288 | 3.75 | 14050 | 0.4274 | 0.92 |
| 0.0928 | 3.76 | 14100 | 0.3524 | 0.94 |
| 0.0696 | 3.77 | 14150 | 0.3555 | 0.9333 |
| 0.0318 | 3.79 | 14200 | 0.3457 | 0.9467 |
| 0.0417 | 3.8 | 14250 | 0.3412 | 0.94 |
| 0.0283 | 3.81 | 14300 | 0.3845 | 0.9333 |
| 0.058 | 3.83 | 14350 | 0.3765 | 0.9333 |
| 0.0589 | 3.84 | 14400 | 0.4085 | 0.9267 |
| 0.0432 | 3.85 | 14450 | 0.4103 | 0.9267 |
| 0.0365 | 3.87 | 14500 | 0.4000 | 0.9267 |
| 0.0858 | 3.88 | 14550 | 0.3905 | 0.9267 |
| 0.0494 | 3.89 | 14600 | 0.3739 | 0.9267 |
| 0.0503 | 3.91 | 14650 | 0.3203 | 0.94 |
| 0.0349 | 3.92 | 14700 | 0.3268 | 0.9467 |
| 0.0328 | 3.93 | 14750 | 0.3259 | 0.9467 |
| 0.0347 | 3.95 | 14800 | 0.3588 | 0.94 |
| 0.0233 | 3.96 | 14850 | 0.3456 | 0.9467 |
| 0.0602 | 3.97 | 14900 | 0.3819 | 0.94 |
| 0.0766 | 3.99 | 14950 | 0.3813 | 0.9333 |
| 0.0562 | 4.0 | 15000 | 0.3669 | 0.9333 |
| 0.0163 | 4.01 | 15050 | 0.4176 | 0.92 |
| 0.007 | 4.03 | 15100 | 0.3694 | 0.9333 |
| 0.0005 | 4.04 | 15150 | 0.3915 | 0.9333 |
| 0.021 | 4.05 | 15200 | 0.4334 | 0.9333 |
| 0.0823 | 4.07 | 15250 | 0.4155 | 0.9333 |
| 0.0509 | 4.08 | 15300 | 0.4056 | 0.9333 |
| 0.0381 | 4.09 | 15350 | 0.3729 | 0.94 |
| 0.045 | 4.11 | 15400 | 0.3940 | 0.9333 |
| 0.0379 | 4.12 | 15450 | 0.4276 | 0.9267 |
| 0.0661 | 4.13 | 15500 | 0.3797 | 0.94 |
| 0.0522 | 4.15 | 15550 | 0.4029 | 0.9333 |
| 0.0189 | 4.16 | 15600 | 0.4424 | 0.9267 |
| 0.0191 | 4.17 | 15650 | 0.4711 | 0.92 |
| 0.031 | 4.19 | 15700 | 0.4344 | 0.9333 |
| 0.0837 | 4.2 | 15750 | 0.3703 | 0.94 |
| 0.0397 | 4.21 | 15800 | 0.3976 | 0.9333 |
| 0.034 | 4.23 | 15850 | 0.4021 | 0.9333 |
| 0.0199 | 4.24 | 15900 | 0.4015 | 0.9333 |
| 0.0315 | 4.25 | 15950 | 0.3652 | 0.94 |
| 0.076 | 4.27 | 16000 | 0.3421 | 0.94 |
| 0.0478 | 4.28 | 16050 | 0.3122 | 0.9533 |
| 0.0203 | 4.29 | 16100 | 0.3436 | 0.9467 |
| 0.0706 | 4.31 | 16150 | 0.3544 | 0.94 |
| 0.0086 | 4.32 | 16200 | 0.3730 | 0.94 |
| 0.05 | 4.33 | 16250 | 0.3761 | 0.94 |
| 0.048 | 4.35 | 16300 | 0.3583 | 0.94 |
| 0.0715 | 4.36 | 16350 | 0.3459 | 0.94 |
| 0.0316 | 4.37 | 16400 | 0.3355 | 0.94 |
| 0.0356 | 4.39 | 16450 | 0.3278 | 0.9467 |
| 0.0176 | 4.4 | 16500 | 0.3177 | 0.9467 |
| 0.0817 | 4.41 | 16550 | 0.3705 | 0.9333 |
| 0.0414 | 4.43 | 16600 | 0.3919 | 0.9333 |
| 0.0198 | 4.44 | 16650 | 0.3435 | 0.9467 |
| 0.0203 | 4.45 | 16700 | 0.3708 | 0.94 |
| 0.0391 | 4.47 | 16750 | 0.3615 | 0.94 |
| 0.0132 | 4.48 | 16800 | 0.3827 | 0.94 |
| 0.0385 | 4.49 | 16850 | 0.3837 | 0.94 |
| 0.0366 | 4.51 | 16900 | 0.3633 | 0.94 |
| 0.0779 | 4.52 | 16950 | 0.3403 | 0.9467 |
| 0.0168 | 4.53 | 17000 | 0.4592 | 0.92 |
| 0.0517 | 4.55 | 17050 | 0.4063 | 0.9333 |
| 0.0138 | 4.56 | 17100 | 0.4335 | 0.9267 |
| 0.0123 | 4.57 | 17150 | 0.3777 | 0.9333 |
| 0.0324 | 4.59 | 17200 | 0.4657 | 0.92 |
| 0.0202 | 4.6 | 17250 | 0.4791 | 0.92 |
| 0.001 | 4.61 | 17300 | 0.4761 | 0.92 |
| 0.0364 | 4.63 | 17350 | 0.4663 | 0.92 |
| 0.0154 | 4.64 | 17400 | 0.4611 | 0.92 |
| 0.0184 | 4.65 | 17450 | 0.4616 | 0.92 |
| 0.0004 | 4.67 | 17500 | 0.4650 | 0.92 |
| 0.0192 | 4.68 | 17550 | 0.4649 | 0.92 |
| 0.0185 | 4.69 | 17600 | 0.4654 | 0.92 |
| 0.0196 | 4.71 | 17650 | 0.4643 | 0.92 |
| 0.0386 | 4.72 | 17700 | 0.4660 | 0.92 |
| 0.0236 | 4.73 | 17750 | 0.4499 | 0.9267 |
| 0.0383 | 4.75 | 17800 | 0.4479 | 0.9267 |
| 0.0398 | 4.76 | 17850 | 0.4483 | 0.9267 |
| 0.0004 | 4.77 | 17900 | 0.4541 | 0.9267 |
| 0.023 | 4.79 | 17950 | 0.4387 | 0.9267 |
| 0.0361 | 4.8 | 18000 | 0.4409 | 0.9267 |
| 0.0409 | 4.81 | 18050 | 0.4384 | 0.9267 |
| 0.0004 | 4.83 | 18100 | 0.4376 | 0.9267 |
| 0.0171 | 4.84 | 18150 | 0.4421 | 0.9267 |
| 0.0589 | 4.85 | 18200 | 0.4373 | 0.9267 |
| 0.0004 | 4.87 | 18250 | 0.4492 | 0.9267 |
| 0.0142 | 4.88 | 18300 | 0.4585 | 0.9267 |
| 0.0561 | 4.89 | 18350 | 0.4681 | 0.9267 |
| 0.0204 | 4.91 | 18400 | 0.4608 | 0.9267 |
| 0.0248 | 4.92 | 18450 | 0.4641 | 0.9267 |
| 0.0404 | 4.93 | 18500 | 0.4567 | 0.9267 |
| 0.0608 | 4.95 | 18550 | 0.4518 | 0.9267 |
| 0.0412 | 4.96 | 18600 | 0.4510 | 0.9267 |
| 0.0183 | 4.97 | 18650 | 0.4522 | 0.9267 |
| 0.0567 | 4.99 | 18700 | 0.4492 | 0.9267 |
| 0.0173 | 5.0 | 18750 | 0.4490 | 0.9267 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
yfzhoucs/TinyLanguageRobots
|
yfzhoucs
| 2022-11-01T22:57:47Z | 11 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-01T22:51:50Z |
---
license: mit
task: robotics
---
|
dumitrescustefan/t5-v1_1-base-romanian
|
dumitrescustefan
| 2022-11-01T22:29:39Z | 448 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2022-11-01T21:55:48Z |
---
language: ro
inference: false
license: apache-2.0
---
This is a pretrained-from-scratch **T5v1.1 base** model (**247M** parameters) on the [t5x](https://github.com/google-research/t5x) platform.
Training was performed on a clean 80GB Romanian text corpus for 4M steps with these [scripts](https://github.com/dumitrescustefan/t5x_models). The model was trained with an encoder sequence length of 512 and a decoder sequence length of 256.
**!! IMPORTANT !!** This model was pretrained on the span corruption MLM task, meaning this model is **not usable** in any downstream task **without finetuning** first!
### How to load a t5x model
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained('dumitrescustefan/t5-v1_1-base-romanian')
model = T5Model.from_pretrained('dumitrescustefan/t5-v1_1-base-romanian')
input_ids = tokenizer("Acesta este un test", return_tensors="pt").input_ids # Batch size 1
decoder_input_ids = tokenizer("Acesta este", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for T5Model.
# This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape) # this will print [1, 3, 768]
```
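Because the checkpoint is only useful after fine-tuning, here is a hedged sketch of how a downstream seq2seq fine-tune could be started with `T5ForConditionalGeneration`, which (as the comment above notes) handles the decoder shift internally via the `labels` argument. The task prefix and texts are placeholders, not a documented task.
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('dumitrescustefan/t5-v1_1-base-romanian')
model = T5ForConditionalGeneration.from_pretrained('dumitrescustefan/t5-v1_1-base-romanian')
# Toy input/target pair; the "rezuma:" prefix is illustrative only.
inputs = tokenizer("rezuma: Acesta este un text lung despre istoria orasului.", return_tensors="pt")
labels = tokenizer("Un rezumat scurt.", return_tensors="pt").input_ids
# Passing labels shifts the decoder inputs internally, so no manual _shift_right is needed.
loss = model(input_ids=inputs.input_ids,
             attention_mask=inputs.attention_mask,
             labels=labels).loss
loss.backward()  # one illustrative backward pass; a real fine-tune wraps this in an optimizer loop
print(float(loss))
```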
Remember to always sanitize your text! Replace the cedilla letters ``ş`` and ``ţ`` with their comma-below counterparts using:
```python
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
because the model was **not** trained on the cedilla letters ``ş`` and ``ţ``. If you skip this step, performance will degrade due to ``<UNK>`` tokens and an increased number of tokens per word.
### Acknowledgements
We'd like to thank [TPU Research Cloud](https://sites.research.google/trc/about/) for providing the TPUv4 cores we used to train these models!
### Authors
Yours truly,
_[Stefan Dumitrescu](https://github.com/dumitrescustefan), [Mihai Ilie](https://github.com/iliemihai) and [Per Egil Kummervold](https://huggingface.co/north)_
|
dumitrescustefan/t5-v1_1-large-romanian
|
dumitrescustefan
| 2022-11-01T22:29:19Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"mt5",
"text2text-generation",
"ro",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text2text-generation
| 2022-11-01T22:00:14Z |
---
language: ro
inference: false
license: apache-2.0
---
This is a pretrained-from-scratch **T5v1.1 large** model (**783M** parameters) on the [t5x](https://github.com/google-research/t5x) platform.
Training was performed on a clean 80GB Romanian text corpus for 4M steps with these [scripts](https://github.com/dumitrescustefan/t5x_models). The model was trained with an encoder and decoder sequence length of 512.
**!! IMPORTANT !!** This model was pretrained on the span corruption MLM task, meaning this model is **not usable** in any downstream task **without finetuning** first!
### How to load a t5x model
```python
from transformers import T5Tokenizer, T5Model
tokenizer = T5Tokenizer.from_pretrained('dumitrescustefan/t5-v1_1-large-romanian')
model = T5Model.from_pretrained('dumitrescustefan/t5-v1_1-large-romanian')
input_ids = tokenizer("Acesta este un test", return_tensors="pt").input_ids # Batch size 1
decoder_input_ids = tokenizer("Acesta este", return_tensors="pt").input_ids # Batch size 1
# preprocess: Prepend decoder_input_ids with start token which is pad token for T5Model.
# This is not needed for torch's T5ForConditionalGeneration as it does this internally using labels arg.
decoder_input_ids = model._shift_right(decoder_input_ids)
# forward pass
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
print(last_hidden_states.shape) # this will print [1, 3, 1024]
```
Remember to always sanitize your text! Replace the cedilla letters ``ş`` and ``ţ`` with their comma-below counterparts using:
```python
text = text.replace("ţ", "ț").replace("ş", "ș").replace("Ţ", "Ț").replace("Ş", "Ș")
```
because the model was **not** trained on the cedilla letters ``ş`` and ``ţ``. If you skip this step, performance will degrade due to ``<UNK>`` tokens and an increased number of tokens per word.
### Acknowledgements
We'd like to thank [TPU Research Cloud](https://sites.research.google/trc/about/) for providing the TPUv4 cores we used to train these models!
### Authors
Yours truly,
_[Stefan Dumitrescu](https://github.com/dumitrescustefan), [Mihai Ilie](https://github.com/iliemihai) and [Per Egil Kummervold](https://huggingface.co/north)_
|
pig4431/amazonPolarity_BERT_5E
|
pig4431
| 2022-11-01T22:23:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_polarity",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T22:23:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: amazonPolarity_BERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: train
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.9066666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazonPolarity_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4402
- Accuracy: 0.9067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7011 | 0.03 | 50 | 0.6199 | 0.7 |
| 0.6238 | 0.05 | 100 | 0.4710 | 0.8133 |
| 0.4478 | 0.08 | 150 | 0.3249 | 0.8733 |
| 0.3646 | 0.11 | 200 | 0.3044 | 0.86 |
| 0.3244 | 0.13 | 250 | 0.2548 | 0.86 |
| 0.2734 | 0.16 | 300 | 0.2666 | 0.88 |
| 0.2784 | 0.19 | 350 | 0.2416 | 0.88 |
| 0.2706 | 0.21 | 400 | 0.2660 | 0.88 |
| 0.2368 | 0.24 | 450 | 0.2522 | 0.8867 |
| 0.2449 | 0.27 | 500 | 0.3135 | 0.88 |
| 0.262 | 0.29 | 550 | 0.2718 | 0.8733 |
| 0.2111 | 0.32 | 600 | 0.2494 | 0.8933 |
| 0.2459 | 0.35 | 650 | 0.2468 | 0.8867 |
| 0.2264 | 0.37 | 700 | 0.3049 | 0.8667 |
| 0.2572 | 0.4 | 750 | 0.2054 | 0.8933 |
| 0.1749 | 0.43 | 800 | 0.3489 | 0.86 |
| 0.2423 | 0.45 | 850 | 0.2142 | 0.8933 |
| 0.1931 | 0.48 | 900 | 0.2096 | 0.9067 |
| 0.2444 | 0.51 | 950 | 0.3404 | 0.8733 |
| 0.2666 | 0.53 | 1000 | 0.2378 | 0.9067 |
| 0.2311 | 0.56 | 1050 | 0.2416 | 0.9067 |
| 0.2269 | 0.59 | 1100 | 0.3188 | 0.8733 |
| 0.2143 | 0.61 | 1150 | 0.2343 | 0.9 |
| 0.2181 | 0.64 | 1200 | 0.2606 | 0.8667 |
| 0.2151 | 0.67 | 1250 | 0.1888 | 0.9133 |
| 0.2694 | 0.69 | 1300 | 0.3982 | 0.8467 |
| 0.2408 | 0.72 | 1350 | 0.1978 | 0.9067 |
| 0.2043 | 0.75 | 1400 | 0.2125 | 0.9 |
| 0.2081 | 0.77 | 1450 | 0.2680 | 0.8933 |
| 0.2361 | 0.8 | 1500 | 0.3723 | 0.8467 |
| 0.2503 | 0.83 | 1550 | 0.3427 | 0.8733 |
| 0.1983 | 0.85 | 1600 | 0.2525 | 0.9067 |
| 0.1947 | 0.88 | 1650 | 0.2427 | 0.9133 |
| 0.2411 | 0.91 | 1700 | 0.2448 | 0.9 |
| 0.2381 | 0.93 | 1750 | 0.3354 | 0.88 |
| 0.1852 | 0.96 | 1800 | 0.3078 | 0.8667 |
| 0.2427 | 0.99 | 1850 | 0.2408 | 0.9 |
| 0.1582 | 1.01 | 1900 | 0.2698 | 0.9133 |
| 0.159 | 1.04 | 1950 | 0.3383 | 0.9 |
| 0.1833 | 1.07 | 2000 | 0.2849 | 0.9 |
| 0.1257 | 1.09 | 2050 | 0.5376 | 0.8667 |
| 0.1513 | 1.12 | 2100 | 0.4469 | 0.88 |
| 0.1869 | 1.15 | 2150 | 0.3415 | 0.8933 |
| 0.1342 | 1.17 | 2200 | 0.3021 | 0.8867 |
| 0.1404 | 1.2 | 2250 | 0.3619 | 0.88 |
| 0.1576 | 1.23 | 2300 | 0.2815 | 0.9 |
| 0.1419 | 1.25 | 2350 | 0.4351 | 0.8867 |
| 0.1491 | 1.28 | 2400 | 0.3025 | 0.9133 |
| 0.1914 | 1.31 | 2450 | 0.3011 | 0.9067 |
| 0.1265 | 1.33 | 2500 | 0.3953 | 0.88 |
| 0.128 | 1.36 | 2550 | 0.2557 | 0.9333 |
| 0.1631 | 1.39 | 2600 | 0.2226 | 0.9333 |
| 0.1019 | 1.41 | 2650 | 0.3638 | 0.9133 |
| 0.1551 | 1.44 | 2700 | 0.3591 | 0.9 |
| 0.1853 | 1.47 | 2750 | 0.5005 | 0.8733 |
| 0.1578 | 1.49 | 2800 | 0.2662 | 0.92 |
| 0.1522 | 1.52 | 2850 | 0.2545 | 0.9267 |
| 0.1188 | 1.55 | 2900 | 0.3874 | 0.88 |
| 0.1638 | 1.57 | 2950 | 0.3003 | 0.92 |
| 0.1583 | 1.6 | 3000 | 0.2702 | 0.92 |
| 0.1844 | 1.63 | 3050 | 0.2183 | 0.9333 |
| 0.1365 | 1.65 | 3100 | 0.3322 | 0.8933 |
| 0.1683 | 1.68 | 3150 | 0.2069 | 0.9467 |
| 0.168 | 1.71 | 3200 | 0.4046 | 0.8667 |
| 0.1907 | 1.73 | 3250 | 0.3411 | 0.8933 |
| 0.1695 | 1.76 | 3300 | 0.1992 | 0.9333 |
| 0.1851 | 1.79 | 3350 | 0.2370 | 0.92 |
| 0.1302 | 1.81 | 3400 | 0.3058 | 0.9133 |
| 0.1353 | 1.84 | 3450 | 0.3134 | 0.9067 |
| 0.1428 | 1.87 | 3500 | 0.3767 | 0.8667 |
| 0.1642 | 1.89 | 3550 | 0.3239 | 0.8867 |
| 0.1319 | 1.92 | 3600 | 0.4725 | 0.86 |
| 0.1714 | 1.95 | 3650 | 0.3115 | 0.8867 |
| 0.1265 | 1.97 | 3700 | 0.3621 | 0.8867 |
| 0.1222 | 2.0 | 3750 | 0.3665 | 0.8933 |
| 0.0821 | 2.03 | 3800 | 0.2482 | 0.9133 |
| 0.1136 | 2.05 | 3850 | 0.3244 | 0.9 |
| 0.0915 | 2.08 | 3900 | 0.4745 | 0.8733 |
| 0.0967 | 2.11 | 3950 | 0.2346 | 0.94 |
| 0.0962 | 2.13 | 4000 | 0.3139 | 0.92 |
| 0.1001 | 2.16 | 4050 | 0.2944 | 0.9267 |
| 0.086 | 2.19 | 4100 | 0.5542 | 0.86 |
| 0.0588 | 2.21 | 4150 | 0.4377 | 0.9 |
| 0.1056 | 2.24 | 4200 | 0.3540 | 0.9133 |
| 0.0899 | 2.27 | 4250 | 0.5661 | 0.8733 |
| 0.0737 | 2.29 | 4300 | 0.5683 | 0.8733 |
| 0.1152 | 2.32 | 4350 | 0.2997 | 0.9333 |
| 0.0852 | 2.35 | 4400 | 0.5055 | 0.8933 |
| 0.1114 | 2.37 | 4450 | 0.3099 | 0.92 |
| 0.0821 | 2.4 | 4500 | 0.3026 | 0.9267 |
| 0.0698 | 2.43 | 4550 | 0.3250 | 0.92 |
| 0.1123 | 2.45 | 4600 | 0.3674 | 0.9 |
| 0.1196 | 2.48 | 4650 | 0.4539 | 0.8733 |
| 0.0617 | 2.51 | 4700 | 0.3446 | 0.92 |
| 0.0939 | 2.53 | 4750 | 0.3302 | 0.92 |
| 0.1114 | 2.56 | 4800 | 0.5149 | 0.8733 |
| 0.1154 | 2.59 | 4850 | 0.4935 | 0.8867 |
| 0.1495 | 2.61 | 4900 | 0.4706 | 0.8933 |
| 0.0858 | 2.64 | 4950 | 0.4048 | 0.9 |
| 0.0767 | 2.67 | 5000 | 0.3849 | 0.9133 |
| 0.0569 | 2.69 | 5050 | 0.5491 | 0.8867 |
| 0.1058 | 2.72 | 5100 | 0.5872 | 0.8733 |
| 0.0899 | 2.75 | 5150 | 0.3159 | 0.92 |
| 0.0757 | 2.77 | 5200 | 0.5861 | 0.8733 |
| 0.1305 | 2.8 | 5250 | 0.3633 | 0.9133 |
| 0.1027 | 2.83 | 5300 | 0.3972 | 0.9133 |
| 0.1259 | 2.85 | 5350 | 0.4197 | 0.8933 |
| 0.1255 | 2.88 | 5400 | 0.4583 | 0.8867 |
| 0.0981 | 2.91 | 5450 | 0.4657 | 0.8933 |
| 0.0736 | 2.93 | 5500 | 0.4036 | 0.9133 |
| 0.116 | 2.96 | 5550 | 0.3026 | 0.9067 |
| 0.0692 | 2.99 | 5600 | 0.3409 | 0.9133 |
| 0.0721 | 3.01 | 5650 | 0.5598 | 0.8733 |
| 0.052 | 3.04 | 5700 | 0.4130 | 0.9133 |
| 0.0661 | 3.07 | 5750 | 0.2589 | 0.9333 |
| 0.0667 | 3.09 | 5800 | 0.4484 | 0.9067 |
| 0.0599 | 3.12 | 5850 | 0.4883 | 0.9 |
| 0.0406 | 3.15 | 5900 | 0.4516 | 0.9067 |
| 0.0837 | 3.17 | 5950 | 0.3394 | 0.9267 |
| 0.0636 | 3.2 | 6000 | 0.4649 | 0.8867 |
| 0.0861 | 3.23 | 6050 | 0.5046 | 0.8933 |
| 0.0667 | 3.25 | 6100 | 0.3252 | 0.92 |
| 0.0401 | 3.28 | 6150 | 0.2771 | 0.94 |
| 0.0998 | 3.31 | 6200 | 0.4509 | 0.9 |
| 0.0209 | 3.33 | 6250 | 0.4666 | 0.8933 |
| 0.0747 | 3.36 | 6300 | 0.5430 | 0.8867 |
| 0.0678 | 3.39 | 6350 | 0.4050 | 0.9067 |
| 0.0685 | 3.41 | 6400 | 0.3738 | 0.92 |
| 0.0654 | 3.44 | 6450 | 0.4486 | 0.9 |
| 0.0496 | 3.47 | 6500 | 0.4386 | 0.9067 |
| 0.0379 | 3.49 | 6550 | 0.4547 | 0.9067 |
| 0.0897 | 3.52 | 6600 | 0.4197 | 0.9133 |
| 0.0729 | 3.55 | 6650 | 0.2855 | 0.9333 |
| 0.0515 | 3.57 | 6700 | 0.4459 | 0.9067 |
| 0.0588 | 3.6 | 6750 | 0.3627 | 0.92 |
| 0.0724 | 3.63 | 6800 | 0.4060 | 0.9267 |
| 0.0607 | 3.65 | 6850 | 0.4505 | 0.9133 |
| 0.0252 | 3.68 | 6900 | 0.5465 | 0.8933 |
| 0.0594 | 3.71 | 6950 | 0.4786 | 0.9067 |
| 0.0743 | 3.73 | 7000 | 0.4163 | 0.9267 |
| 0.0506 | 3.76 | 7050 | 0.3801 | 0.92 |
| 0.0548 | 3.79 | 7100 | 0.3557 | 0.9267 |
| 0.0932 | 3.81 | 7150 | 0.4278 | 0.9133 |
| 0.0643 | 3.84 | 7200 | 0.4673 | 0.9 |
| 0.0631 | 3.87 | 7250 | 0.3611 | 0.92 |
| 0.0793 | 3.89 | 7300 | 0.3956 | 0.9067 |
| 0.0729 | 3.92 | 7350 | 0.6630 | 0.8733 |
| 0.0552 | 3.95 | 7400 | 0.4259 | 0.8867 |
| 0.0432 | 3.97 | 7450 | 0.3615 | 0.92 |
| 0.0697 | 4.0 | 7500 | 0.5116 | 0.88 |
| 0.0463 | 4.03 | 7550 | 0.3334 | 0.94 |
| 0.046 | 4.05 | 7600 | 0.4704 | 0.8867 |
| 0.0371 | 4.08 | 7650 | 0.3323 | 0.94 |
| 0.0809 | 4.11 | 7700 | 0.3503 | 0.92 |
| 0.0285 | 4.13 | 7750 | 0.3360 | 0.92 |
| 0.0469 | 4.16 | 7800 | 0.3365 | 0.9333 |
| 0.041 | 4.19 | 7850 | 0.5726 | 0.88 |
| 0.0447 | 4.21 | 7900 | 0.4564 | 0.9067 |
| 0.0144 | 4.24 | 7950 | 0.5521 | 0.8867 |
| 0.0511 | 4.27 | 8000 | 0.5661 | 0.88 |
| 0.0481 | 4.29 | 8050 | 0.3445 | 0.94 |
| 0.036 | 4.32 | 8100 | 0.3247 | 0.94 |
| 0.0662 | 4.35 | 8150 | 0.3647 | 0.9333 |
| 0.051 | 4.37 | 8200 | 0.5024 | 0.9 |
| 0.0546 | 4.4 | 8250 | 0.4737 | 0.8933 |
| 0.0526 | 4.43 | 8300 | 0.4067 | 0.92 |
| 0.0291 | 4.45 | 8350 | 0.3862 | 0.9267 |
| 0.0292 | 4.48 | 8400 | 0.5101 | 0.9 |
| 0.0426 | 4.51 | 8450 | 0.4207 | 0.92 |
| 0.0771 | 4.53 | 8500 | 0.5525 | 0.8867 |
| 0.0668 | 4.56 | 8550 | 0.4487 | 0.9067 |
| 0.0585 | 4.59 | 8600 | 0.3574 | 0.9267 |
| 0.0375 | 4.61 | 8650 | 0.3980 | 0.92 |
| 0.0508 | 4.64 | 8700 | 0.4064 | 0.92 |
| 0.0334 | 4.67 | 8750 | 0.3031 | 0.94 |
| 0.0257 | 4.69 | 8800 | 0.3340 | 0.9333 |
| 0.0165 | 4.72 | 8850 | 0.4011 | 0.92 |
| 0.0553 | 4.75 | 8900 | 0.4243 | 0.9133 |
| 0.0597 | 4.77 | 8950 | 0.3685 | 0.9267 |
| 0.0407 | 4.8 | 9000 | 0.4262 | 0.9133 |
| 0.032 | 4.83 | 9050 | 0.4080 | 0.9133 |
| 0.0573 | 4.85 | 9100 | 0.4416 | 0.9133 |
| 0.0308 | 4.88 | 9150 | 0.4397 | 0.9133 |
| 0.0494 | 4.91 | 9200 | 0.4476 | 0.9067 |
| 0.015 | 4.93 | 9250 | 0.4419 | 0.9067 |
| 0.0443 | 4.96 | 9300 | 0.4347 | 0.9133 |
| 0.0479 | 4.99 | 9350 | 0.4402 | 0.9067 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rikkar/dsd_futurism
|
rikkar
| 2022-11-01T21:59:19Z | 0 | 0 | null |
[
"license:cc0-1.0",
"region:us"
] | null | 2022-11-01T13:51:42Z |
---
license: cc0-1.0
---
Stable Diffusion model trained for 5k steps on the art style "futurism".
Invoke the style with "in the style of futtt". Play with the prompt weighting; it is a strong style, so prompt accordingly.
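A loading snippet is not included on this card. Below is a minimal sketch, assuming the repository hosts a full diffusers-format Stable Diffusion pipeline (if it only ships fine-tuned weights for a base checkpoint, load that base model and merge the weights first); the prompt and sampler settings are illustrative, not tuned.
```python
# Minimal sketch: generate an image in the "futtt" style with diffusers.
# Assumption: the repo contains a complete Stable Diffusion pipeline.
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("rikkar/dsd_futurism", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a bustling train station at dawn, in the style of futtt"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("futurism_sample.png")
```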
Sample images:



Sample training images:





|
AlekseyKorshuk/retriever-coding-guru-adapted
|
AlekseyKorshuk
| 2022-11-01T21:53:11Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-01T21:42:16Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# AlekseyKorshuk/retriever-coding-guru-adapted
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('AlekseyKorshuk/retriever-coding-guru-adapted')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
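#CLS Pooling - Take the hidden state of the first token ([CLS]) as the sentence embedding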
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('AlekseyKorshuk/retriever-coding-guru-adapted')
model = AutoModel.from_pretrained('AlekseyKorshuk/retriever-coding-guru-adapted')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=AlekseyKorshuk/retriever-coding-guru-adapted)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 317 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 31,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
troesy/spanbert-base-cased-LAT-True-added-tokenizer
|
troesy
| 2022-11-01T21:19:53Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-01T20:55:41Z |
---
tags:
- generated_from_trainer
model-index:
- name: spanbert-base-cased-LAT-True-added-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanbert-base-cased-LAT-True-added-tokenizer
This model is a fine-tuned version of [SpanBERT/spanbert-base-cased](https://huggingface.co/SpanBERT/spanbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2767
## Model description
More information needed
## Intended uses & limitations
More information needed
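No inference example is provided yet; below is a minimal sketch, assuming the checkpoint loads as a standard `transformers` token-classification model (the label set is not documented on this card, so the emitted tags are whatever ids the config maps to).
```python
# Minimal sketch: run the fine-tuned SpanBERT token classifier on a sentence.
from transformers import pipeline
tagger = pipeline(
    "token-classification",
    model="troesy/spanbert-base-cased-LAT-True-added-tokenizer",
    aggregation_strategy="simple",  # merge word pieces into whole-word spans
)
print(tagger("The committee approved the proposal on Tuesday."))
```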
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.3422 |
| No log | 2.0 | 348 | 0.2893 |
| 0.3406 | 3.0 | 522 | 0.2767 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/amazonPolarity_roBERTa_5E
|
pig4431
| 2022-11-01T21:13:15Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_polarity",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T21:12:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_polarity
metrics:
- accuracy
model-index:
- name: amazonPolarity_roBERTa_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_polarity
type: amazon_polarity
config: amazon_polarity
split: train
args: amazon_polarity
metrics:
- name: Accuracy
type: accuracy
value: 0.96
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazonPolarity_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the amazon_polarity dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2201
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
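No usage snippet is provided yet; here is a minimal sketch, assuming the usual two-label polarity mapping of the amazon_polarity dataset — verify the label names against `model.config.id2label` before relying on them.
```python
# Minimal sketch: score review sentiment with the fine-tuned RoBERTa checkpoint.
from transformers import pipeline
classifier = pipeline("text-classification", model="pig4431/amazonPolarity_roBERTa_5E")
print(classifier("The battery died after two days, very disappointing."))
print(classifier("Exactly as described and arrived early. Would buy again."))
```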
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5785 | 0.05 | 50 | 0.2706 | 0.9133 |
| 0.2731 | 0.11 | 100 | 0.2379 | 0.9267 |
| 0.2223 | 0.16 | 150 | 0.1731 | 0.92 |
| 0.1887 | 0.21 | 200 | 0.1672 | 0.9267 |
| 0.1915 | 0.27 | 250 | 0.2946 | 0.9067 |
| 0.1981 | 0.32 | 300 | 0.1744 | 0.9267 |
| 0.1617 | 0.37 | 350 | 0.2349 | 0.92 |
| 0.1919 | 0.43 | 400 | 0.1605 | 0.9333 |
| 0.1713 | 0.48 | 450 | 0.1626 | 0.94 |
| 0.1961 | 0.53 | 500 | 0.1555 | 0.9467 |
| 0.1652 | 0.59 | 550 | 0.1996 | 0.94 |
| 0.1719 | 0.64 | 600 | 0.1848 | 0.9333 |
| 0.159 | 0.69 | 650 | 0.1783 | 0.9467 |
| 0.1533 | 0.75 | 700 | 0.2016 | 0.9467 |
| 0.1749 | 0.8 | 750 | 0.3943 | 0.8733 |
| 0.1675 | 0.85 | 800 | 0.1948 | 0.9133 |
| 0.1601 | 0.91 | 850 | 0.2044 | 0.92 |
| 0.1424 | 0.96 | 900 | 0.1061 | 0.9533 |
| 0.1447 | 1.01 | 950 | 0.2195 | 0.9267 |
| 0.0997 | 1.07 | 1000 | 0.2102 | 0.9333 |
| 0.1454 | 1.12 | 1050 | 0.1648 | 0.9467 |
| 0.1326 | 1.17 | 1100 | 0.2774 | 0.9 |
| 0.1192 | 1.23 | 1150 | 0.1337 | 0.96 |
| 0.1429 | 1.28 | 1200 | 0.1451 | 0.96 |
| 0.1227 | 1.33 | 1250 | 0.1995 | 0.94 |
| 0.1343 | 1.39 | 1300 | 0.2115 | 0.92 |
| 0.1208 | 1.44 | 1350 | 0.1832 | 0.9467 |
| 0.1314 | 1.49 | 1400 | 0.1298 | 0.96 |
| 0.1069 | 1.55 | 1450 | 0.1778 | 0.94 |
| 0.126 | 1.6 | 1500 | 0.1205 | 0.9667 |
| 0.1162 | 1.65 | 1550 | 0.1569 | 0.9533 |
| 0.0961 | 1.71 | 1600 | 0.1865 | 0.9467 |
| 0.13 | 1.76 | 1650 | 0.1458 | 0.96 |
| 0.1206 | 1.81 | 1700 | 0.1648 | 0.96 |
| 0.1096 | 1.87 | 1750 | 0.2221 | 0.9333 |
| 0.1138 | 1.92 | 1800 | 0.1727 | 0.9533 |
| 0.1258 | 1.97 | 1850 | 0.2036 | 0.9467 |
| 0.1032 | 2.03 | 1900 | 0.1710 | 0.9667 |
| 0.082 | 2.08 | 1950 | 0.2380 | 0.9467 |
| 0.101 | 2.13 | 2000 | 0.1868 | 0.9533 |
| 0.0913 | 2.19 | 2050 | 0.2934 | 0.9267 |
| 0.0859 | 2.24 | 2100 | 0.2385 | 0.9333 |
| 0.1019 | 2.29 | 2150 | 0.1697 | 0.9667 |
| 0.1069 | 2.35 | 2200 | 0.1815 | 0.94 |
| 0.0805 | 2.4 | 2250 | 0.2185 | 0.9467 |
| 0.0906 | 2.45 | 2300 | 0.1923 | 0.96 |
| 0.105 | 2.51 | 2350 | 0.1720 | 0.96 |
| 0.0866 | 2.56 | 2400 | 0.1710 | 0.96 |
| 0.0821 | 2.61 | 2450 | 0.2267 | 0.9533 |
| 0.107 | 2.67 | 2500 | 0.2203 | 0.9467 |
| 0.0841 | 2.72 | 2550 | 0.1621 | 0.9533 |
| 0.0811 | 2.77 | 2600 | 0.1954 | 0.9533 |
| 0.1077 | 2.83 | 2650 | 0.2107 | 0.9533 |
| 0.0771 | 2.88 | 2700 | 0.2398 | 0.9467 |
| 0.08 | 2.93 | 2750 | 0.1816 | 0.96 |
| 0.0827 | 2.99 | 2800 | 0.2311 | 0.9467 |
| 0.1118 | 3.04 | 2850 | 0.1825 | 0.96 |
| 0.0626 | 3.09 | 2900 | 0.2876 | 0.9333 |
| 0.0733 | 3.14 | 2950 | 0.2045 | 0.9467 |
| 0.0554 | 3.2 | 3000 | 0.1775 | 0.96 |
| 0.0569 | 3.25 | 3050 | 0.2208 | 0.9467 |
| 0.0566 | 3.3 | 3100 | 0.2113 | 0.9533 |
| 0.063 | 3.36 | 3150 | 0.2013 | 0.96 |
| 0.056 | 3.41 | 3200 | 0.2229 | 0.96 |
| 0.0791 | 3.46 | 3250 | 0.2472 | 0.9467 |
| 0.0867 | 3.52 | 3300 | 0.1630 | 0.9667 |
| 0.0749 | 3.57 | 3350 | 0.2066 | 0.9533 |
| 0.0653 | 3.62 | 3400 | 0.2085 | 0.96 |
| 0.0784 | 3.68 | 3450 | 0.2068 | 0.9467 |
| 0.074 | 3.73 | 3500 | 0.1976 | 0.96 |
| 0.076 | 3.78 | 3550 | 0.1953 | 0.9533 |
| 0.0807 | 3.84 | 3600 | 0.2246 | 0.9467 |
| 0.077 | 3.89 | 3650 | 0.1867 | 0.9533 |
| 0.0771 | 3.94 | 3700 | 0.2035 | 0.9533 |
| 0.0658 | 4.0 | 3750 | 0.1754 | 0.9667 |
| 0.0711 | 4.05 | 3800 | 0.1977 | 0.9667 |
| 0.066 | 4.1 | 3850 | 0.1806 | 0.9667 |
| 0.0627 | 4.16 | 3900 | 0.1819 | 0.96 |
| 0.0671 | 4.21 | 3950 | 0.2247 | 0.9533 |
| 0.0245 | 4.26 | 4000 | 0.2482 | 0.9467 |
| 0.0372 | 4.32 | 4050 | 0.2201 | 0.96 |
| 0.0607 | 4.37 | 4100 | 0.2381 | 0.9467 |
| 0.0689 | 4.42 | 4150 | 0.2159 | 0.96 |
| 0.0383 | 4.48 | 4200 | 0.2278 | 0.9533 |
| 0.0382 | 4.53 | 4250 | 0.2277 | 0.96 |
| 0.0626 | 4.58 | 4300 | 0.2325 | 0.96 |
| 0.0595 | 4.64 | 4350 | 0.2315 | 0.96 |
| 0.0578 | 4.69 | 4400 | 0.2284 | 0.96 |
| 0.0324 | 4.74 | 4450 | 0.2297 | 0.96 |
| 0.0476 | 4.8 | 4500 | 0.2154 | 0.96 |
| 0.0309 | 4.85 | 4550 | 0.2258 | 0.96 |
| 0.0748 | 4.9 | 4600 | 0.2131 | 0.96 |
| 0.0731 | 4.96 | 4650 | 0.2201 | 0.96 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jayantapaul888/xtremedistil-l6-h256-uncased-eng-only-sentiment-single-finetuned-memes
|
jayantapaul888
| 2022-11-01T20:53:27Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T20:35:37Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xtremedistil-l6-h256-uncased-eng-only-sentiment-single-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-eng-only-sentiment-single-finetuned-memes
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3513
- Accuracy: 0.8555
- Precision: 0.8706
- Recall: 0.8697
- F1: 0.8699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.3579 | 0.8526 | 0.8685 | 0.8667 | 0.8674 |
| 0.4989 | 2.0 | 756 | 0.3368 | 0.8503 | 0.8665 | 0.8650 | 0.8645 |
| 0.3318 | 3.0 | 1134 | 0.3379 | 0.8533 | 0.8693 | 0.8675 | 0.8678 |
| 0.279 | 4.0 | 1512 | 0.3426 | 0.8555 | 0.8712 | 0.8698 | 0.8696 |
| 0.279 | 5.0 | 1890 | 0.3495 | 0.8555 | 0.8717 | 0.8698 | 0.8698 |
| 0.2471 | 6.0 | 2268 | 0.3513 | 0.8555 | 0.8706 | 0.8697 | 0.8699 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jayantapaul888/xlm-roberta-base-eng-only-sentiment-single-finetuned-memes
|
jayantapaul888
| 2022-11-01T20:15:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T19:34:41Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-base-eng-only-sentiment-single-finetuned-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-eng-only-sentiment-single-finetuned-memes
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5629
- Accuracy: 0.8652
- Precision: 0.8794
- Recall: 0.8786
- F1: 0.8789
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 378 | 0.3506 | 0.8459 | 0.8647 | 0.8584 | 0.8605 |
| 0.4424 | 2.0 | 756 | 0.3264 | 0.8563 | 0.8818 | 0.8696 | 0.8689 |
| 0.2888 | 3.0 | 1134 | 0.3563 | 0.8578 | 0.8759 | 0.8701 | 0.8714 |
| 0.1889 | 4.0 | 1512 | 0.3939 | 0.8585 | 0.8733 | 0.8729 | 0.8730 |
| 0.1889 | 5.0 | 1890 | 0.4698 | 0.8622 | 0.8765 | 0.8761 | 0.8763 |
| 0.1136 | 6.0 | 2268 | 0.5629 | 0.8652 | 0.8794 | 0.8786 | 0.8789 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/IMDB_BERT_5E
|
pig4431
| 2022-11-01T19:39:15Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T19:38:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_BERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9533333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_BERT_5E
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2316
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
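No usage snippet is provided yet; the sketch below uses the explicit tokenizer/model API rather than the pipeline helper, and reads the label names from the checkpoint's own config instead of assuming an order.
```python
# Minimal sketch: explicit forward pass through the fine-tuned BERT classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
name = "pig4431/IMDB_BERT_5E"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
inputs = tokenizer("A slow start, but the last act is unforgettable.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print({model.config.id2label[i]: round(p.item(), 4) for i, p in enumerate(probs[0])})
```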
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7094 | 0.03 | 50 | 0.6527 | 0.6467 |
| 0.5867 | 0.06 | 100 | 0.3681 | 0.8533 |
| 0.3441 | 0.1 | 150 | 0.2455 | 0.9 |
| 0.3052 | 0.13 | 200 | 0.3143 | 0.88 |
| 0.2991 | 0.16 | 250 | 0.1890 | 0.92 |
| 0.2954 | 0.19 | 300 | 0.2012 | 0.9267 |
| 0.2723 | 0.22 | 350 | 0.2178 | 0.9333 |
| 0.255 | 0.26 | 400 | 0.1740 | 0.9267 |
| 0.2675 | 0.29 | 450 | 0.1667 | 0.9467 |
| 0.3071 | 0.32 | 500 | 0.1766 | 0.9333 |
| 0.2498 | 0.35 | 550 | 0.1928 | 0.9267 |
| 0.2402 | 0.38 | 600 | 0.1334 | 0.94 |
| 0.2449 | 0.42 | 650 | 0.1332 | 0.9467 |
| 0.2298 | 0.45 | 700 | 0.1375 | 0.9333 |
| 0.2625 | 0.48 | 750 | 0.1529 | 0.9467 |
| 0.2459 | 0.51 | 800 | 0.1621 | 0.94 |
| 0.2499 | 0.54 | 850 | 0.1606 | 0.92 |
| 0.2405 | 0.58 | 900 | 0.1375 | 0.94 |
| 0.208 | 0.61 | 950 | 0.1697 | 0.94 |
| 0.2642 | 0.64 | 1000 | 0.1507 | 0.9467 |
| 0.2272 | 0.67 | 1050 | 0.1478 | 0.94 |
| 0.2769 | 0.7 | 1100 | 0.1423 | 0.9467 |
| 0.2293 | 0.74 | 1150 | 0.1434 | 0.9467 |
| 0.2212 | 0.77 | 1200 | 0.1371 | 0.9533 |
| 0.2176 | 0.8 | 1250 | 0.1380 | 0.9533 |
| 0.2269 | 0.83 | 1300 | 0.1453 | 0.9467 |
| 0.2422 | 0.86 | 1350 | 0.1450 | 0.9467 |
| 0.2141 | 0.9 | 1400 | 0.1775 | 0.9467 |
| 0.235 | 0.93 | 1450 | 0.1302 | 0.9467 |
| 0.2275 | 0.96 | 1500 | 0.1304 | 0.9467 |
| 0.2282 | 0.99 | 1550 | 0.1620 | 0.9533 |
| 0.1898 | 1.02 | 1600 | 0.1482 | 0.9333 |
| 0.1677 | 1.06 | 1650 | 0.1304 | 0.9533 |
| 0.1533 | 1.09 | 1700 | 0.1270 | 0.96 |
| 0.1915 | 1.12 | 1750 | 0.1601 | 0.9533 |
| 0.1687 | 1.15 | 1800 | 0.1515 | 0.9467 |
| 0.1605 | 1.18 | 1850 | 0.1729 | 0.9467 |
| 0.1731 | 1.22 | 1900 | 0.1529 | 0.94 |
| 0.1308 | 1.25 | 1950 | 0.1577 | 0.96 |
| 0.1792 | 1.28 | 2000 | 0.1668 | 0.9333 |
| 0.1987 | 1.31 | 2050 | 0.1613 | 0.9533 |
| 0.1782 | 1.34 | 2100 | 0.1542 | 0.96 |
| 0.199 | 1.38 | 2150 | 0.1437 | 0.9533 |
| 0.1224 | 1.41 | 2200 | 0.1674 | 0.96 |
| 0.1854 | 1.44 | 2250 | 0.1831 | 0.9533 |
| 0.1622 | 1.47 | 2300 | 0.1403 | 0.9533 |
| 0.1586 | 1.5 | 2350 | 0.1417 | 0.96 |
| 0.1375 | 1.54 | 2400 | 0.1409 | 0.9533 |
| 0.1401 | 1.57 | 2450 | 0.1759 | 0.96 |
| 0.1999 | 1.6 | 2500 | 0.1172 | 0.96 |
| 0.1746 | 1.63 | 2550 | 0.1479 | 0.96 |
| 0.1983 | 1.66 | 2600 | 0.1498 | 0.9467 |
| 0.1658 | 1.7 | 2650 | 0.1375 | 0.9533 |
| 0.1492 | 1.73 | 2700 | 0.1504 | 0.9667 |
| 0.1435 | 1.76 | 2750 | 0.1340 | 0.9667 |
| 0.1473 | 1.79 | 2800 | 0.1262 | 0.9667 |
| 0.1692 | 1.82 | 2850 | 0.1323 | 0.9533 |
| 0.1567 | 1.86 | 2900 | 0.1339 | 0.96 |
| 0.1615 | 1.89 | 2950 | 0.1204 | 0.9667 |
| 0.1677 | 1.92 | 3000 | 0.1202 | 0.9667 |
| 0.1426 | 1.95 | 3050 | 0.1310 | 0.96 |
| 0.1754 | 1.98 | 3100 | 0.1469 | 0.9533 |
| 0.1395 | 2.02 | 3150 | 0.1663 | 0.96 |
| 0.0702 | 2.05 | 3200 | 0.1399 | 0.9733 |
| 0.1351 | 2.08 | 3250 | 0.1520 | 0.9667 |
| 0.1194 | 2.11 | 3300 | 0.1410 | 0.9667 |
| 0.1087 | 2.14 | 3350 | 0.1361 | 0.9733 |
| 0.1245 | 2.18 | 3400 | 0.1490 | 0.9533 |
| 0.1285 | 2.21 | 3450 | 0.1799 | 0.96 |
| 0.0801 | 2.24 | 3500 | 0.1776 | 0.9533 |
| 0.117 | 2.27 | 3550 | 0.1756 | 0.9667 |
| 0.1105 | 2.3 | 3600 | 0.1749 | 0.9533 |
| 0.1359 | 2.34 | 3650 | 0.1750 | 0.96 |
| 0.1328 | 2.37 | 3700 | 0.1857 | 0.9533 |
| 0.1201 | 2.4 | 3750 | 0.1834 | 0.9533 |
| 0.1239 | 2.43 | 3800 | 0.1923 | 0.9533 |
| 0.0998 | 2.46 | 3850 | 0.1882 | 0.9533 |
| 0.0907 | 2.5 | 3900 | 0.1722 | 0.96 |
| 0.1214 | 2.53 | 3950 | 0.1787 | 0.96 |
| 0.0858 | 2.56 | 4000 | 0.1927 | 0.96 |
| 0.1384 | 2.59 | 4050 | 0.1312 | 0.96 |
| 0.0951 | 2.62 | 4100 | 0.1348 | 0.96 |
| 0.1325 | 2.66 | 4150 | 0.1652 | 0.9533 |
| 0.1429 | 2.69 | 4200 | 0.1603 | 0.9533 |
| 0.0923 | 2.72 | 4250 | 0.2141 | 0.94 |
| 0.1336 | 2.75 | 4300 | 0.1348 | 0.9733 |
| 0.0893 | 2.78 | 4350 | 0.1356 | 0.9667 |
| 0.1057 | 2.82 | 4400 | 0.1932 | 0.9533 |
| 0.0928 | 2.85 | 4450 | 0.1868 | 0.9533 |
| 0.0586 | 2.88 | 4500 | 0.1620 | 0.96 |
| 0.1426 | 2.91 | 4550 | 0.1944 | 0.9533 |
| 0.1394 | 2.94 | 4600 | 0.1630 | 0.96 |
| 0.0785 | 2.98 | 4650 | 0.1560 | 0.9667 |
| 0.0772 | 3.01 | 4700 | 0.2093 | 0.9467 |
| 0.0565 | 3.04 | 4750 | 0.1785 | 0.96 |
| 0.0771 | 3.07 | 4800 | 0.2361 | 0.9467 |
| 0.0634 | 3.1 | 4850 | 0.1809 | 0.96 |
| 0.0847 | 3.13 | 4900 | 0.1496 | 0.9733 |
| 0.0526 | 3.17 | 4950 | 0.1620 | 0.9667 |
| 0.0796 | 3.2 | 5000 | 0.1764 | 0.9667 |
| 0.0786 | 3.23 | 5050 | 0.1798 | 0.9667 |
| 0.0531 | 3.26 | 5100 | 0.1698 | 0.9667 |
| 0.0445 | 3.29 | 5150 | 0.2088 | 0.96 |
| 0.1212 | 3.33 | 5200 | 0.1842 | 0.9533 |
| 0.0825 | 3.36 | 5250 | 0.2016 | 0.9533 |
| 0.0782 | 3.39 | 5300 | 0.1775 | 0.9533 |
| 0.0627 | 3.42 | 5350 | 0.1656 | 0.96 |
| 0.0898 | 3.45 | 5400 | 0.2331 | 0.9533 |
| 0.0882 | 3.49 | 5450 | 0.2514 | 0.9467 |
| 0.0798 | 3.52 | 5500 | 0.2090 | 0.9533 |
| 0.0474 | 3.55 | 5550 | 0.2322 | 0.96 |
| 0.0773 | 3.58 | 5600 | 0.2023 | 0.96 |
| 0.0862 | 3.61 | 5650 | 0.2247 | 0.96 |
| 0.0723 | 3.65 | 5700 | 0.2001 | 0.96 |
| 0.0549 | 3.68 | 5750 | 0.2031 | 0.9533 |
| 0.044 | 3.71 | 5800 | 0.2133 | 0.96 |
| 0.0644 | 3.74 | 5850 | 0.1876 | 0.9667 |
| 0.0868 | 3.77 | 5900 | 0.2182 | 0.9533 |
| 0.072 | 3.81 | 5950 | 0.1856 | 0.9667 |
| 0.092 | 3.84 | 6000 | 0.2120 | 0.96 |
| 0.0806 | 3.87 | 6050 | 0.2006 | 0.9533 |
| 0.0627 | 3.9 | 6100 | 0.1900 | 0.9533 |
| 0.0738 | 3.93 | 6150 | 0.1869 | 0.96 |
| 0.0667 | 3.97 | 6200 | 0.2216 | 0.96 |
| 0.0551 | 4.0 | 6250 | 0.2147 | 0.9533 |
| 0.0271 | 4.03 | 6300 | 0.2038 | 0.96 |
| 0.0763 | 4.06 | 6350 | 0.2058 | 0.96 |
| 0.0612 | 4.09 | 6400 | 0.2037 | 0.9533 |
| 0.0351 | 4.13 | 6450 | 0.2081 | 0.96 |
| 0.0265 | 4.16 | 6500 | 0.2373 | 0.9533 |
| 0.0391 | 4.19 | 6550 | 0.2264 | 0.9533 |
| 0.0609 | 4.22 | 6600 | 0.2035 | 0.9533 |
| 0.0435 | 4.25 | 6650 | 0.1989 | 0.96 |
| 0.0309 | 4.29 | 6700 | 0.2096 | 0.9667 |
| 0.064 | 4.32 | 6750 | 0.2385 | 0.9533 |
| 0.0388 | 4.35 | 6800 | 0.2071 | 0.96 |
| 0.0267 | 4.38 | 6850 | 0.2336 | 0.96 |
| 0.0433 | 4.41 | 6900 | 0.2045 | 0.9667 |
| 0.0596 | 4.45 | 6950 | 0.2013 | 0.96 |
| 0.0273 | 4.48 | 7000 | 0.2122 | 0.96 |
| 0.0559 | 4.51 | 7050 | 0.2182 | 0.96 |
| 0.0504 | 4.54 | 7100 | 0.2172 | 0.96 |
| 0.0536 | 4.57 | 7150 | 0.2406 | 0.9533 |
| 0.0624 | 4.61 | 7200 | 0.2194 | 0.9533 |
| 0.0668 | 4.64 | 7250 | 0.2156 | 0.96 |
| 0.0208 | 4.67 | 7300 | 0.2150 | 0.96 |
| 0.0436 | 4.7 | 7350 | 0.2361 | 0.9533 |
| 0.0285 | 4.73 | 7400 | 0.2175 | 0.96 |
| 0.0604 | 4.77 | 7450 | 0.2241 | 0.9467 |
| 0.0502 | 4.8 | 7500 | 0.2201 | 0.96 |
| 0.0342 | 4.83 | 7550 | 0.2232 | 0.96 |
| 0.0467 | 4.86 | 7600 | 0.2247 | 0.9533 |
| 0.0615 | 4.89 | 7650 | 0.2235 | 0.96 |
| 0.0769 | 4.93 | 7700 | 0.2302 | 0.9533 |
| 0.0451 | 4.96 | 7750 | 0.2334 | 0.9467 |
| 0.0532 | 4.99 | 7800 | 0.2316 | 0.9533 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
loremipsum3658/jur-v5-fsl-tuned-cla-assun
|
loremipsum3658
| 2022-11-01T19:27:51Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-01T19:14:38Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1099 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1099,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
motmono/Reinforce-PixelCopterPLEv0
|
motmono
| 2022-11-01T18:33:04Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-01T18:30:11Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopterPLEv0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 11.70 +/- 11.40
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Hrimurr/bert-base-multilingual-cased-finetuned-multibert
|
Hrimurr
| 2022-11-01T17:59:23Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-01T17:16:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hrimurr/bert-base-multilingual-cased-finetuned-multibert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hrimurr/bert-base-multilingual-cased-finetuned-multibert
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8092
- Validation Loss: 1.5697
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
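No usage snippet is provided yet; a minimal sketch follows, assuming the repo hosts TensorFlow weights only (as the tags suggest), so the TF framework is requested explicitly and TensorFlow must be installed.
```python
# Minimal sketch: fill a masked token with the fine-tuned multilingual BERT.
from transformers import pipeline
unmasker = pipeline(
    "fill-mask",
    model="Hrimurr/bert-base-multilingual-cased-finetuned-multibert",
    framework="tf",  # the repo lists TF weights; drop this if PyTorch weights are added
)
for pred in unmasker("Paris is the capital of [MASK]."):
    print(pred["token_str"], round(pred["score"], 4))
```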
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.8092 | 1.5697 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
SiddharthaM/twitter-data-xlm-roberta-base-hindi-only-memes
|
SiddharthaM
| 2022-11-01T17:49:41Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T17:00:16Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-xlm-roberta-base-hindi-only-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-xlm-roberta-base-hindi-only-memes
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4006
- Accuracy: 0.9240
- Precision: 0.9255
- Recall: 0.9263
- F1: 0.9259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.7485 | 1.0 | 511 | 0.4062 | 0.8381 | 0.8520 | 0.8422 | 0.8417 |
| 0.4253 | 2.0 | 1022 | 0.3195 | 0.8822 | 0.8880 | 0.8853 | 0.8851 |
| 0.2899 | 3.0 | 1533 | 0.2994 | 0.9031 | 0.9068 | 0.9060 | 0.9049 |
| 0.2116 | 4.0 | 2044 | 0.3526 | 0.9163 | 0.9199 | 0.9185 | 0.9187 |
| 0.1582 | 5.0 | 2555 | 0.4031 | 0.9163 | 0.9193 | 0.9186 | 0.9187 |
| 0.103 | 6.0 | 3066 | 0.4006 | 0.9240 | 0.9255 | 0.9263 | 0.9259 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
SiddharthaM/twitter-data-bert-base-multilingual-uncased-hindi-only-memes
|
SiddharthaM
| 2022-11-01T17:43:39Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T17:18:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-bert-base-multilingual-uncased-hindi-only-memes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-bert-base-multilingual-uncased-hindi-only-memes
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5614
- Accuracy: 0.9031
- Precision: 0.9064
- Recall: 0.9057
- F1: 0.9060
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5408 | 1.0 | 511 | 0.4798 | 0.7974 | 0.8447 | 0.8049 | 0.7940 |
| 0.3117 | 2.0 | 1022 | 0.3576 | 0.8844 | 0.8875 | 0.8882 | 0.8869 |
| 0.2019 | 3.0 | 1533 | 0.3401 | 0.9020 | 0.9076 | 0.9047 | 0.9052 |
| 0.1364 | 4.0 | 2044 | 0.4519 | 0.8888 | 0.8936 | 0.8921 | 0.8923 |
| 0.0767 | 5.0 | 2555 | 0.5251 | 0.8987 | 0.9024 | 0.9016 | 0.9019 |
| 0.0433 | 6.0 | 3066 | 0.5614 | 0.9031 | 0.9064 | 0.9057 | 0.9060 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/IMDB_ALBERT_5E
|
pig4431
| 2022-11-01T17:39:50Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T17:39:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_ALBERT_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9466666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_ALBERT_5E
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2220
- Accuracy: 0.9467
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5285 | 0.06 | 50 | 0.2692 | 0.9133 |
| 0.3515 | 0.13 | 100 | 0.2054 | 0.9267 |
| 0.2314 | 0.19 | 150 | 0.1669 | 0.94 |
| 0.2147 | 0.26 | 200 | 0.1660 | 0.92 |
| 0.2053 | 0.32 | 250 | 0.1546 | 0.94 |
| 0.2143 | 0.38 | 300 | 0.1636 | 0.9267 |
| 0.1943 | 0.45 | 350 | 0.2068 | 0.9467 |
| 0.2107 | 0.51 | 400 | 0.1655 | 0.9333 |
| 0.2059 | 0.58 | 450 | 0.1782 | 0.94 |
| 0.1839 | 0.64 | 500 | 0.1695 | 0.94 |
| 0.2014 | 0.7 | 550 | 0.1481 | 0.9333 |
| 0.2215 | 0.77 | 600 | 0.1588 | 0.9267 |
| 0.1837 | 0.83 | 650 | 0.1352 | 0.9333 |
| 0.1938 | 0.9 | 700 | 0.1389 | 0.94 |
| 0.221 | 0.96 | 750 | 0.1193 | 0.9467 |
| 0.1843 | 1.02 | 800 | 0.1294 | 0.9467 |
| 0.1293 | 1.09 | 850 | 0.1585 | 0.9467 |
| 0.1517 | 1.15 | 900 | 0.1353 | 0.9467 |
| 0.137 | 1.21 | 950 | 0.1391 | 0.9467 |
| 0.1858 | 1.28 | 1000 | 0.1547 | 0.9333 |
| 0.1478 | 1.34 | 1050 | 0.1019 | 0.9533 |
| 0.155 | 1.41 | 1100 | 0.1154 | 0.9667 |
| 0.1439 | 1.47 | 1150 | 0.1306 | 0.9467 |
| 0.1476 | 1.53 | 1200 | 0.2085 | 0.92 |
| 0.1702 | 1.6 | 1250 | 0.1190 | 0.9467 |
| 0.1517 | 1.66 | 1300 | 0.1303 | 0.9533 |
| 0.1551 | 1.73 | 1350 | 0.1200 | 0.9467 |
| 0.1554 | 1.79 | 1400 | 0.1297 | 0.9533 |
| 0.1543 | 1.85 | 1450 | 0.1222 | 0.96 |
| 0.1242 | 1.92 | 1500 | 0.1418 | 0.9467 |
| 0.1312 | 1.98 | 1550 | 0.1279 | 0.9467 |
| 0.1292 | 2.05 | 1600 | 0.1255 | 0.9533 |
| 0.0948 | 2.11 | 1650 | 0.1305 | 0.9667 |
| 0.088 | 2.17 | 1700 | 0.1912 | 0.9333 |
| 0.0949 | 2.24 | 1750 | 0.1594 | 0.9333 |
| 0.1094 | 2.3 | 1800 | 0.1958 | 0.9467 |
| 0.1179 | 2.37 | 1850 | 0.1427 | 0.94 |
| 0.1116 | 2.43 | 1900 | 0.1551 | 0.9333 |
| 0.0742 | 2.49 | 1950 | 0.1743 | 0.94 |
| 0.1016 | 2.56 | 2000 | 0.1603 | 0.9533 |
| 0.0835 | 2.62 | 2050 | 0.1866 | 0.9333 |
| 0.0882 | 2.69 | 2100 | 0.1191 | 0.9467 |
| 0.1032 | 2.75 | 2150 | 0.1420 | 0.96 |
| 0.0957 | 2.81 | 2200 | 0.1403 | 0.96 |
| 0.1234 | 2.88 | 2250 | 0.1232 | 0.96 |
| 0.0669 | 2.94 | 2300 | 0.1557 | 0.9467 |
| 0.0994 | 3.01 | 2350 | 0.1270 | 0.9533 |
| 0.0583 | 3.07 | 2400 | 0.1520 | 0.9533 |
| 0.0651 | 3.13 | 2450 | 0.1641 | 0.9467 |
| 0.0384 | 3.2 | 2500 | 0.2165 | 0.94 |
| 0.0839 | 3.26 | 2550 | 0.1755 | 0.9467 |
| 0.0546 | 3.32 | 2600 | 0.1782 | 0.9333 |
| 0.0703 | 3.39 | 2650 | 0.1945 | 0.94 |
| 0.0734 | 3.45 | 2700 | 0.2139 | 0.9467 |
| 0.0629 | 3.52 | 2750 | 0.1445 | 0.9467 |
| 0.0513 | 3.58 | 2800 | 0.1613 | 0.9667 |
| 0.0794 | 3.64 | 2850 | 0.1742 | 0.9333 |
| 0.0537 | 3.71 | 2900 | 0.1745 | 0.9467 |
| 0.0553 | 3.77 | 2950 | 0.1724 | 0.96 |
| 0.0483 | 3.84 | 3000 | 0.1638 | 0.9533 |
| 0.0647 | 3.9 | 3050 | 0.1986 | 0.9467 |
| 0.0443 | 3.96 | 3100 | 0.1926 | 0.9533 |
| 0.0418 | 4.03 | 3150 | 0.1879 | 0.94 |
| 0.0466 | 4.09 | 3200 | 0.2058 | 0.9333 |
| 0.0491 | 4.16 | 3250 | 0.2017 | 0.9467 |
| 0.0287 | 4.22 | 3300 | 0.2020 | 0.9533 |
| 0.0272 | 4.28 | 3350 | 0.1974 | 0.9533 |
| 0.0359 | 4.35 | 3400 | 0.2242 | 0.9333 |
| 0.0405 | 4.41 | 3450 | 0.2157 | 0.94 |
| 0.0309 | 4.48 | 3500 | 0.2142 | 0.9467 |
| 0.033 | 4.54 | 3550 | 0.2163 | 0.94 |
| 0.0408 | 4.6 | 3600 | 0.2368 | 0.94 |
| 0.0336 | 4.67 | 3650 | 0.2173 | 0.94 |
| 0.0356 | 4.73 | 3700 | 0.2230 | 0.94 |
| 0.0548 | 4.8 | 3750 | 0.2181 | 0.9533 |
| 0.042 | 4.86 | 3800 | 0.2240 | 0.9333 |
| 0.0292 | 4.92 | 3850 | 0.2259 | 0.9267 |
| 0.0196 | 4.99 | 3900 | 0.2220 | 0.9467 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
musika/musika-irish-jigs
|
musika
| 2022-11-01T15:03:23Z | 0 | 1 | null |
[
"audio",
"music",
"generation",
"tensorflow",
"arxiv:2208.08706",
"license:mit",
"region:us"
] | null | 2022-11-01T15:03:05Z |
---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: musika_irish_jigs
## Model provided by: rjadr
Pretrained musika_irish_jigs model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation.
Introduced in [this paper](https://arxiv.org/abs/2208.08706).
## How to use
You can generate music from this pretrained musika_irish_jigs model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r).
### Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of gradient penalty regularization on-the-fly. The gradient penalty weighting term is contained in *switch.npy*. The generator is conditioned on a latent coordinate system to produce samples of arbitrary length. The latent representations produced by the generator are then passed to a decoder which converts them into waveform audio.
The generator has a context window of about 12 seconds of audio.
|
huggingtweets/oliverjumpertz
|
huggingtweets
| 2022-11-01T14:16:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T14:15:25Z |
---
language: en
thumbnail: http://www.huggingtweets.com/oliverjumpertz/1667312196044/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1458152201344495624/tW6dBUm6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Oliver Jumpertz</div>
<div style="text-align: center; font-size: 14px;">@oliverjumpertz</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Oliver Jumpertz.
| Data | Oliver Jumpertz |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 239 |
| Short tweets | 835 |
| Tweets kept | 2176 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/238ero8e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oliverjumpertz's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1h1p9vyl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1h1p9vyl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/oliverjumpertz')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gislei/meu
|
gislei
| 2022-11-01T14:13:15Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-01T14:10:45Z |
---
license: bigscience-openrail-m
---
|
fay/ddpm-butterflies-128
|
fay
| 2022-11-01T14:02:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-01T12:22:07Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
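Until the card's own snippet is filled in, here is a minimal sketch, assuming the repo stores a standard `DDPMPipeline` as the tags indicate; sampling 128x128 images with the full 1000 denoising steps is slow on CPU.
```python
# Minimal sketch: sample one butterfly image from the unconditional DDPM.
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("fay/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```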
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/fay/ddpm-butterflies-128/tensorboard?#scalars)
|
emrevarol/dz_finetuning-medium-distillbert-95K
|
emrevarol
| 2022-11-01T13:40:20Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T13:17:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dz_finetuning-medium-distillbert-95K
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dz_finetuning-medium-distillbert-95K
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0047
- Accuracy: 0.9991
- F1: 0.9991
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/IMDB_ELECTRA_5E
|
pig4431
| 2022-11-01T13:37:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T13:36:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_ELECTRA_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9533333333333334
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_ELECTRA_5E
This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Accuracy: 0.9533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6784 | 0.03 | 50 | 0.6027 | 0.84 |
| 0.4378 | 0.06 | 100 | 0.2217 | 0.9533 |
| 0.3063 | 0.1 | 150 | 0.1879 | 0.94 |
| 0.2183 | 0.13 | 200 | 0.1868 | 0.9333 |
| 0.2058 | 0.16 | 250 | 0.1548 | 0.9467 |
| 0.2287 | 0.19 | 300 | 0.1572 | 0.9533 |
| 0.1677 | 0.22 | 350 | 0.1472 | 0.9533 |
| 0.1997 | 0.26 | 400 | 0.1400 | 0.9533 |
| 0.1966 | 0.29 | 450 | 0.1592 | 0.9333 |
| 0.184 | 0.32 | 500 | 0.1529 | 0.96 |
| 0.2084 | 0.35 | 550 | 0.2136 | 0.9067 |
| 0.2126 | 0.38 | 600 | 0.1508 | 0.9533 |
| 0.1695 | 0.42 | 650 | 0.1442 | 0.9667 |
| 0.2506 | 0.45 | 700 | 0.1811 | 0.9467 |
| 0.1755 | 0.48 | 750 | 0.1336 | 0.96 |
| 0.1874 | 0.51 | 800 | 0.1403 | 0.9533 |
| 0.1535 | 0.54 | 850 | 0.1239 | 0.96 |
| 0.1458 | 0.58 | 900 | 0.1198 | 0.9667 |
| 0.1649 | 0.61 | 950 | 0.1538 | 0.9533 |
| 0.2014 | 0.64 | 1000 | 0.1196 | 0.9667 |
| 0.1651 | 0.67 | 1050 | 0.1200 | 0.96 |
| 0.1595 | 0.7 | 1100 | 0.1155 | 0.96 |
| 0.1787 | 0.74 | 1150 | 0.1175 | 0.96 |
| 0.1666 | 0.77 | 1200 | 0.1264 | 0.9533 |
| 0.1412 | 0.8 | 1250 | 0.1655 | 0.9533 |
| 0.1949 | 0.83 | 1300 | 0.1363 | 0.9467 |
| 0.1485 | 0.86 | 1350 | 0.1434 | 0.9667 |
| 0.1801 | 0.9 | 1400 | 0.1379 | 0.9667 |
| 0.178 | 0.93 | 1450 | 0.1498 | 0.96 |
| 0.1767 | 0.96 | 1500 | 0.1507 | 0.9533 |
| 0.1452 | 0.99 | 1550 | 0.1340 | 0.94 |
| 0.1465 | 1.02 | 1600 | 0.1416 | 0.96 |
| 0.1115 | 1.06 | 1650 | 0.1435 | 0.96 |
| 0.1212 | 1.09 | 1700 | 0.1379 | 0.96 |
| 0.1534 | 1.12 | 1750 | 0.1198 | 0.96 |
| 0.1164 | 1.15 | 1800 | 0.1011 | 0.96 |
| 0.1383 | 1.18 | 1850 | 0.1043 | 0.96 |
| 0.1415 | 1.22 | 1900 | 0.0914 | 0.9533 |
| 0.136 | 1.25 | 1950 | 0.1341 | 0.9533 |
| 0.1301 | 1.28 | 2000 | 0.1303 | 0.9533 |
| 0.1486 | 1.31 | 2050 | 0.1027 | 0.9733 |
| 0.0844 | 1.34 | 2100 | 0.1410 | 0.9667 |
| 0.1388 | 1.38 | 2150 | 0.1265 | 0.9667 |
| 0.12 | 1.41 | 2200 | 0.1139 | 0.9667 |
| 0.1329 | 1.44 | 2250 | 0.1259 | 0.9667 |
| 0.0982 | 1.47 | 2300 | 0.1349 | 0.9667 |
| 0.1271 | 1.5 | 2350 | 0.1176 | 0.9667 |
| 0.1286 | 1.54 | 2400 | 0.1349 | 0.9533 |
| 0.1079 | 1.57 | 2450 | 0.1335 | 0.96 |
| 0.1236 | 1.6 | 2500 | 0.1393 | 0.96 |
| 0.1285 | 1.63 | 2550 | 0.1635 | 0.96 |
| 0.0932 | 1.66 | 2600 | 0.1571 | 0.9533 |
| 0.1222 | 1.7 | 2650 | 0.1610 | 0.9533 |
| 0.1421 | 1.73 | 2700 | 0.1296 | 0.9533 |
| 0.1581 | 1.76 | 2750 | 0.1289 | 0.96 |
| 0.1245 | 1.79 | 2800 | 0.1180 | 0.9667 |
| 0.1196 | 1.82 | 2850 | 0.1371 | 0.96 |
| 0.1062 | 1.86 | 2900 | 0.1269 | 0.96 |
| 0.1188 | 1.89 | 2950 | 0.1259 | 0.9667 |
| 0.1183 | 1.92 | 3000 | 0.1164 | 0.9667 |
| 0.1173 | 1.95 | 3050 | 0.1280 | 0.9667 |
| 0.1344 | 1.98 | 3100 | 0.1439 | 0.96 |
| 0.1166 | 2.02 | 3150 | 0.1442 | 0.96 |
| 0.0746 | 2.05 | 3200 | 0.1562 | 0.96 |
| 0.0813 | 2.08 | 3250 | 0.1760 | 0.96 |
| 0.0991 | 2.11 | 3300 | 0.1485 | 0.9667 |
| 0.076 | 2.14 | 3350 | 0.1530 | 0.9533 |
| 0.087 | 2.18 | 3400 | 0.1441 | 0.96 |
| 0.0754 | 2.21 | 3450 | 0.1401 | 0.9667 |
| 0.0878 | 2.24 | 3500 | 0.1480 | 0.96 |
| 0.0605 | 2.27 | 3550 | 0.1579 | 0.9667 |
| 0.0424 | 2.3 | 3600 | 0.1897 | 0.9667 |
| 0.0541 | 2.34 | 3650 | 0.1784 | 0.96 |
| 0.0755 | 2.37 | 3700 | 0.1527 | 0.9733 |
| 0.1089 | 2.4 | 3750 | 0.1376 | 0.9733 |
| 0.1061 | 2.43 | 3800 | 0.1329 | 0.9667 |
| 0.0858 | 2.46 | 3850 | 0.1539 | 0.9667 |
| 0.1424 | 2.5 | 3900 | 0.1296 | 0.9667 |
| 0.0928 | 2.53 | 3950 | 0.1324 | 0.9667 |
| 0.0669 | 2.56 | 4000 | 0.1371 | 0.9667 |
| 0.0797 | 2.59 | 4050 | 0.1493 | 0.9667 |
| 0.0563 | 2.62 | 4100 | 0.1657 | 0.96 |
| 0.0579 | 2.66 | 4150 | 0.1799 | 0.9533 |
| 0.1014 | 2.69 | 4200 | 0.1625 | 0.96 |
| 0.0629 | 2.72 | 4250 | 0.1388 | 0.9733 |
| 0.1331 | 2.75 | 4300 | 0.1522 | 0.9667 |
| 0.0535 | 2.78 | 4350 | 0.1449 | 0.9667 |
| 0.1103 | 2.82 | 4400 | 0.1394 | 0.9733 |
| 0.0691 | 2.85 | 4450 | 0.1324 | 0.9733 |
| 0.0869 | 2.88 | 4500 | 0.1146 | 0.9667 |
| 0.068 | 2.91 | 4550 | 0.1621 | 0.9667 |
| 0.0854 | 2.94 | 4600 | 0.1995 | 0.96 |
| 0.0907 | 2.98 | 4650 | 0.1819 | 0.96 |
| 0.0679 | 3.01 | 4700 | 0.1771 | 0.9533 |
| 0.0632 | 3.04 | 4750 | 0.1388 | 0.9667 |
| 0.0653 | 3.07 | 4800 | 0.1652 | 0.9667 |
| 0.0305 | 3.1 | 4850 | 0.1474 | 0.9733 |
| 0.065 | 3.13 | 4900 | 0.1741 | 0.9667 |
| 0.0909 | 3.17 | 4950 | 0.1417 | 0.9733 |
| 0.0663 | 3.2 | 5000 | 0.1578 | 0.9667 |
| 0.0204 | 3.23 | 5050 | 0.1801 | 0.9667 |
| 0.0478 | 3.26 | 5100 | 0.1892 | 0.9667 |
| 0.0809 | 3.29 | 5150 | 0.1724 | 0.9667 |
| 0.0454 | 3.33 | 5200 | 0.2045 | 0.96 |
| 0.0958 | 3.36 | 5250 | 0.1635 | 0.9667 |
| 0.0258 | 3.39 | 5300 | 0.1831 | 0.9667 |
| 0.0621 | 3.42 | 5350 | 0.1663 | 0.9667 |
| 0.064 | 3.45 | 5400 | 0.1794 | 0.9667 |
| 0.0629 | 3.49 | 5450 | 0.1737 | 0.9667 |
| 0.0436 | 3.52 | 5500 | 0.1815 | 0.9667 |
| 0.0378 | 3.55 | 5550 | 0.1903 | 0.9667 |
| 0.0149 | 3.58 | 5600 | 0.1876 | 0.9667 |
| 0.0698 | 3.61 | 5650 | 0.1861 | 0.9667 |
| 0.047 | 3.65 | 5700 | 0.1764 | 0.9667 |
| 0.0739 | 3.68 | 5750 | 0.1510 | 0.9667 |
| 0.0363 | 3.71 | 5800 | 0.1802 | 0.96 |
| 0.031 | 3.74 | 5850 | 0.1688 | 0.9733 |
| 0.1034 | 3.77 | 5900 | 0.1764 | 0.9667 |
| 0.0588 | 3.81 | 5950 | 0.1840 | 0.9667 |
| 0.0433 | 3.84 | 6000 | 0.1743 | 0.9667 |
| 0.057 | 3.87 | 6050 | 0.1896 | 0.9667 |
| 0.0385 | 3.9 | 6100 | 0.1959 | 0.9667 |
| 0.0483 | 3.93 | 6150 | 0.1982 | 0.9667 |
| 0.0292 | 3.97 | 6200 | 0.2016 | 0.9667 |
| 0.0456 | 4.0 | 6250 | 0.1981 | 0.9667 |
| 0.0563 | 4.03 | 6300 | 0.1915 | 0.9667 |
| 0.0346 | 4.06 | 6350 | 0.1967 | 0.9667 |
| 0.038 | 4.09 | 6400 | 0.2035 | 0.9667 |
| 0.0341 | 4.13 | 6450 | 0.2356 | 0.96 |
| 0.0425 | 4.16 | 6500 | 0.1913 | 0.9667 |
| 0.0282 | 4.19 | 6550 | 0.2091 | 0.96 |
| 0.0543 | 4.22 | 6600 | 0.2311 | 0.9533 |
| 0.0139 | 4.25 | 6650 | 0.2260 | 0.96 |
| 0.0587 | 4.29 | 6700 | 0.2257 | 0.96 |
| 0.0446 | 4.32 | 6750 | 0.2439 | 0.9533 |
| 0.0447 | 4.35 | 6800 | 0.2444 | 0.9533 |
| 0.0199 | 4.38 | 6850 | 0.2327 | 0.96 |
| 0.0392 | 4.41 | 6900 | 0.2476 | 0.9533 |
| 0.0596 | 4.45 | 6950 | 0.2443 | 0.9533 |
| 0.0292 | 4.48 | 7000 | 0.2499 | 0.9533 |
| 0.0325 | 4.51 | 7050 | 0.2430 | 0.9533 |
| 0.0316 | 4.54 | 7100 | 0.2272 | 0.96 |
| 0.0259 | 4.57 | 7150 | 0.2275 | 0.96 |
| 0.0294 | 4.61 | 7200 | 0.2339 | 0.9533 |
| 0.0292 | 4.64 | 7250 | 0.2304 | 0.96 |
| 0.0258 | 4.67 | 7300 | 0.2258 | 0.96 |
| 0.0221 | 4.7 | 7350 | 0.2164 | 0.96 |
| 0.0407 | 4.73 | 7400 | 0.2212 | 0.96 |
| 0.0344 | 4.77 | 7450 | 0.2246 | 0.96 |
| 0.0364 | 4.8 | 7500 | 0.2211 | 0.96 |
| 0.0266 | 4.83 | 7550 | 0.2207 | 0.96 |
| 0.0419 | 4.86 | 7600 | 0.2199 | 0.9533 |
| 0.0283 | 4.89 | 7650 | 0.2185 | 0.96 |
| 0.0193 | 4.93 | 7700 | 0.2173 | 0.96 |
| 0.022 | 4.96 | 7750 | 0.2157 | 0.9533 |
| 0.0517 | 4.99 | 7800 | 0.2158 | 0.9533 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
rufimelo/Legal-BERTimbau-large-TSDAE-v4
|
rufimelo
| 2022-11-01T13:12:56Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"tsdae",
"pt",
"dataset:rufimelo/PortugueseLegalSentences-v1",
"license:mit",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-01T01:00:28Z |
---
language:
- pt
thumbnail: "Portugues BERT for the Legal Domain"
tags:
- bert
- pytorch
- tsdae
datasets:
- rufimelo/PortugueseLegalSentences-v1
license: "mit"
widget:
- text: "O advogado apresentou [MASK] ao juíz."
---
# Legal_BERTimbau
## Introduction
Legal_BERTimbau Large is a fine-tuned BERT model based on [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) Large.
"BERTimbau Base is a pretrained BERT model for Brazilian Portuguese that achieves state-of-the-art performances on three downstream NLP tasks: Named Entity Recognition, Sentence Textual Similarity and Recognizing Textual Entailment. It is available in two sizes: Base and Large.
For further information or requests, please go to [BERTimbau repository](https://github.com/neuralmind-ai/portuguese-bert/)."
The performance of language models can change drastically when there is a domain shift between training and test data. In order to create a Portuguese language model adapted to the legal domain, the original BERTimbau model underwent a fine-tuning stage in which one "pre-training" epoch was performed over 200,000 cleaned documents (lr: 1e-5, using the TSDAE technique).
## Available models
| Model | Arch. | #Layers | #Params |
| ---------------------------------------- | ---------- | ------- | ------- |
|`rufimelo/Legal-BERTimbau-base` |BERT-Base |12 |110M|
| `rufimelo/Legal-BERTimbau-large` | BERT-Large | 24 | 335M |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE-v3")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE")
```
### Masked language modeling prediction example
```python
from transformers import pipeline
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE-v3")
model = AutoModelForMaskedLM.from_pretrained("rufimelo/Legal-BERTimbau-large-TSDAE-v3")
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('O advogado apresentou [MASK] para o juíz')
# [{'score': 0.5034703612327576,
#'token': 8190,
#'token_str': 'recurso',
#'sequence': 'O advogado apresentou recurso para o juíz'},
#{'score': 0.07347951829433441,
#'token': 21973,
#'token_str': 'petição',
#'sequence': 'O advogado apresentou petição para o juíz'},
#{'score': 0.05165359005331993,
#'token': 4299,
#'token_str': 'resposta',
#'sequence': 'O advogado apresentou resposta para o juíz'},
#{'score': 0.04611917585134506,
#'token': 5265,
#'token_str': 'exposição',
#'sequence': 'O advogado apresentou exposição para o juíz'},
#{'score': 0.04068068787455559,
#'token': 19737, 'token_str':
#'alegações',
#'sequence': 'O advogado apresentou alegações para o juíz'}]
```
### For BERT embeddings
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-large-TSDAE-v4')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-large-TSDAE-v4')
input_ids = tokenizer.encode('O advogado apresentou recurso para o juíz', return_tensors='pt')
with torch.no_grad():
outs = model(input_ids)
encoded = outs[0][0, 1:-1]
#tensor([[ 0.0328, -0.4292, -0.6230, ..., -0.3048, -0.5674, 0.0157],
#[-0.3569, 0.3326, 0.7013, ..., -0.7778, 0.2646, 1.1310],
#[ 0.3169, 0.4333, 0.2026, ..., 1.0517, -0.1951, 0.7050],
#...,
#[-0.3648, -0.8137, -0.4764, ..., -0.2725, -0.4879, 0.6264],
#[-0.2264, -0.1821, -0.3011, ..., -0.5428, 0.1429, 0.0509],
#[-1.4617, 0.6281, -0.0625, ..., -1.2774, -0.4491, 0.3131]])
```
## Citation
If you use this work, please cite BERTimbau's work:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
```
|
emrevarol/dz_finetuning_distilbert-base-uncased-finetuned-sst-2-english
|
emrevarol
| 2022-11-01T13:03:58Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T12:54:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dz_finetuning_distilbert-base-uncased-finetuned-sst-2-english
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dz_finetuning_distilbert-base-uncased-finetuned-sst-2-english
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0363
- Accuracy: 0.9933
- F1: 0.9938
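A minimal inference sketch (illustrative usage; the training data is not documented, so the example sentence is only an assumption about the intended domain):
```python
from transformers import pipeline

# Load this checkpoint as a text-classification pipeline
classifier = pipeline(
    "text-classification",
    model="emrevarol/dz_finetuning_distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The service was quick and the staff were friendly."))
```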
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
kzk-kbys/distilbert-base-uncased-finetuned-emotion
|
kzk-kbys
| 2022-11-01T12:49:13Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T14:34:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.94
- name: F1
type: f1
value: 0.940059296063194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1895
- Accuracy: 0.94
- F1: 0.9401
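A minimal inference sketch; the printed label names come from the checkpoint's `id2label` mapping and may be generic (`LABEL_0`, `LABEL_1`, ...) if it was not customized during training:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "kzk-kbys/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single sentence and print the probability of each emotion class
inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
for idx, p in enumerate(probs):
    print(model.config.id2label[idx], f"{p.item():.3f}")
```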
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4628 | 1.0 | 2000 | 0.2334 | 0.9315 | 0.9312 |
| 0.1579 | 2.0 | 4000 | 0.1895 | 0.94 | 0.9401 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
memento/ddpm-butterflies-128
|
memento
| 2022-11-01T12:35:06Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-01T11:20:13Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumed usage, not taken from the training script):
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("memento/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/memento/ddpm-butterflies-128/tensorboard?#scalars)
|
zhujlfine/wav2vec2-common_voice-tr-demo
|
zhujlfine
| 2022-11-01T12:28:13Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-01T08:35:56Z |
---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-tr-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5682
- Wer: 0.5739
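A minimal transcription sketch; `"sample.wav"` is a placeholder path and should point to 16 kHz mono Turkish speech:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="zhujlfine/wav2vec2-common_voice-tr-demo",
)
# Returns a dict such as {"text": "..."}
print(asr("sample.wav"))
```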
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 3.69 | 100 | 3.5365 | 1.0 |
| No log | 7.4 | 200 | 2.9341 | 0.9999 |
| No log | 11.11 | 300 | 0.6994 | 0.6841 |
| No log | 14.8 | 400 | 0.5623 | 0.5792 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
|
emrevarol/dz_finetuning-sentiment-model-3000-samples
|
emrevarol
| 2022-11-01T12:18:37Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T12:05:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: dz_finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dz_finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0553
- Accuracy: 0.99
- F1: 0.9908
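A minimal inference sketch; because the training data is not documented, the label names may be the generic `LABEL_0`/`LABEL_1` and the example sentences are purely illustrative:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="emrevarol/dz_finetuning-sentiment-model-3000-samples",
)
print(classifier(["I really enjoyed this.", "This was a waste of money."]))
```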
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/manjhunathravi
|
huggingtweets
| 2022-11-01T11:48:44Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T11:47:34Z |
---
language: en
thumbnail: http://www.huggingtweets.com/manjhunathravi/1667303320061/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1550071946041102336/7TWTKpfv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Manjhunath Ravi 🚀</div>
<div style="text-align: center; font-size: 14px;">@manjhunathravi</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Manjhunath Ravi 🚀.
| Data | Manjhunath Ravi 🚀 |
| --- | --- |
| Tweets downloaded | 3218 |
| Retweets | 2 |
| Short tweets | 287 |
| Tweets kept | 2929 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r8x1jof9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @manjhunathravi's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/mmrw5vmz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/mmrw5vmz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/manjhunathravi')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gabrielgmendonca/bert-base-portuguese-cased-finetuned-enjoei
|
gabrielgmendonca
| 2022-11-01T11:41:39Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-01T01:14:08Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: gabrielgmendonca/bert-base-portuguese-cased-finetuned-enjoei
results: []
---
# gabrielgmendonca/bert-base-portuguese-cased-finetuned-enjoei
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)
on a teaching dataset extracted from https://www.enjoei.com.br/.
It achieves the following results on the evaluation set:
- Train Loss: 6.0784
- Validation Loss: 5.2882
- Epoch: 2
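A minimal fill-mask sketch; the example sentence is illustrative and not taken from the teaching dataset:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="gabrielgmendonca/bert-base-portuguese-cased-finetuned-enjoei",
)
# Print the top predicted tokens for the masked position
for pred in fill_mask("Vendo uma [MASK] usada em ótimo estado."):
    print(pred["token_str"], round(pred["score"], 3))
```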
## Intended uses & limitations
This model is intended for **educational purposes**.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -985, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.3618 | 5.7723 | 0 |
| 6.3353 | 5.4076 | 1 |
| 6.0784 | 5.2882 | 2 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
davidmasip/deforestation_predatathon
|
davidmasip
| 2022-11-01T11:05:41Z | 4 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-11-01T09:04:53Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
ayyuce/my_solar_model
|
ayyuce
| 2022-11-01T11:04:54Z | 0 | 1 |
sklearn
|
[
"sklearn",
"skops",
"tabular-regression",
"license:mit",
"region:us"
] |
tabular-regression
| 2022-11-01T11:04:25Z |
---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
widget:
structuredData:
AMBIENT_TEMPERATURE:
- 21.4322062
- 27.322759933333337
- 25.56246340000001
DAILY_YIELD:
- 0.0
- 996.4285714
- 685.0
DC_POWER:
- 0.0
- 8358.285714
- 6741.285714
IRRADIATION:
- 0.0
- 0.6465474886666664
- 0.498367802
MODULE_TEMPERATURE:
- 19.826896066666663
- 45.7407144
- 38.252356133333336
TOTAL_YIELD:
- 7218223.0
- 6366043.429
- 6372656.0
---
# Model description
This is an ElasticNet regression model (see the model plot below) trained on Solar Power Generation Data.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model was trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|------------------|------------|
| alpha | 1.0 |
| copy_X | True |
| fit_intercept | True |
| l1_ratio | 0.5 |
| max_iter | 1000 |
| normalize | deprecated |
| positive | False |
| precompute | False |
| random_state | 0 |
| selection | cyclic |
| tol | 0.0001 |
| warm_start | False |
</details>
### Model Plot
The model plot is below.
<style>#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b {color: black;background-color: white;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b pre{padding: 0;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-toggleable {background-color: white;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-estimator:hover {background-color: #d4ebff;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-item {z-index: 1;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-parallel-item:only-child::after {width: 0;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 
1.2em;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b div.sk-container {display: inline-block;position: relative;}</style><div id="sk-a3a3b863-d5cf-4b57-9e19-e3d8f2db0a0b" class"sk-top-container"><div class="sk-container"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="d20384ee-8f34-4e73-b4a5-b15dfd56af7a" type="checkbox" checked><label class="sk-toggleable__label" for="d20384ee-8f34-4e73-b4a5-b15dfd56af7a">ElasticNet</label><div class="sk-toggleable__content"><pre>ElasticNet(random_state=0)</pre></div></div></div></div></div>
## Evaluation Results
You can find the details of the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
| accuracy | 99.9994 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle

# "model.pkl" is a placeholder path for the pickled estimator file from this repository
model_filename = "model.pkl"
with open(model_filename, "rb") as file:
    clf = pickle.load(file)
```
</details>
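Once loaded, predictions can be made from a frame with the same feature columns as the widget example above; this is a sketch under the assumption that those six columns are exactly the features the model expects:
```python
import pandas as pd

# One example row built from the widget data above
X = pd.DataFrame({
    "AMBIENT_TEMPERATURE": [25.56],
    "DAILY_YIELD": [685.0],
    "DC_POWER": [6741.29],
    "IRRADIATION": [0.498],
    "MODULE_TEMPERATURE": [38.25],
    "TOTAL_YIELD": [6372656.0],
})
print(clf.predict(X))  # `clf` is the estimator un-pickled in the snippet above
```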
# Model Card Authors
This model card was written by the following authors:
ayyuce demirbas
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2022}}
```
|
cjbarrie/bert-base-multilingual-uncased-finetuned-masress
|
cjbarrie
| 2022-11-01T10:21:27Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-23T17:56:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-multilingual-uncased-finetuned-masress
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-masress
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0946
- Accuracy: 0.5782
- F1: 0.5769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
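For reference, these settings roughly correspond to the following `TrainingArguments` (a sketch only; the dataset and preprocessing are not documented here):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-multilingual-uncased-finetuned-masress",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)
```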
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1646 | 1.0 | 151 | 1.0626 | 0.5588 | 0.5566 |
| 0.9281 | 2.0 | 302 | 0.9800 | 0.5869 | 0.5792 |
| 0.8269 | 3.0 | 453 | 1.0134 | 0.5911 | 0.5775 |
| 0.7335 | 4.0 | 604 | 1.0644 | 0.5861 | 0.5816 |
| 0.6786 | 5.0 | 755 | 1.0946 | 0.5782 | 0.5769 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
cgt/Roberta-wwm-ext-large-qa
|
cgt
| 2022-11-01T09:34:05Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:cmrc2018",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-31T02:20:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cmrc2018
model-index:
- name: Roberta-wwm-ext-large-qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Roberta-wwm-ext-large-qa
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on the cmrc2018 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1416
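A minimal extractive QA sketch; the question and context are illustrative examples, not taken from cmrc2018:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="cgt/Roberta-wwm-ext-large-qa")
result = qa(
    question="北京是哪个国家的首都?",
    context="北京是中华人民共和国的首都,也是全国的政治和文化中心。",
)
print(result["answer"], result["score"])
```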
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3584 | 1.0 | 1200 | 1.1416 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.10.0+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
kkotkar1/t5-base-finetuned-eli5
|
kkotkar1
| 2022-11-01T08:05:49Z | 129 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-01T02:10:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-base-finetuned-eli5-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 0.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-eli5-finetuned-eli5
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
- Gen Len: 0.0
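A minimal generation sketch; the exact input format used during fine-tuning is not documented, so the `question:` prefix below is an assumption:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="kkotkar1/t5-base-finetuned-eli5")
prompt = "question: Why is the sky blue?"  # assumed prompt format
print(generator(prompt, max_length=64))
```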
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 68159 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
pig4431/IMDB_roBERTa_5E
|
pig4431
| 2022-11-01T08:01:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-01T08:00:15Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: IMDB_roBERTa_5E
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9466666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IMDB_roBERTa_5E
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2383
- Accuracy: 0.9467
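A minimal inference sketch; `truncation=True` is passed because IMDB reviews can exceed RoBERTa's 512-token limit, and the example review is illustrative:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="pig4431/IMDB_roBERTa_5E")
review = "A slow start, but the final act more than makes up for it."
print(classifier(review, truncation=True))
```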
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5851 | 0.06 | 50 | 0.1789 | 0.94 |
| 0.2612 | 0.13 | 100 | 0.1520 | 0.9533 |
| 0.2339 | 0.19 | 150 | 0.1997 | 0.9267 |
| 0.2349 | 0.26 | 200 | 0.1702 | 0.92 |
| 0.207 | 0.32 | 250 | 0.1515 | 0.9333 |
| 0.2222 | 0.38 | 300 | 0.1522 | 0.9467 |
| 0.1916 | 0.45 | 350 | 0.1328 | 0.94 |
| 0.1559 | 0.51 | 400 | 0.1676 | 0.94 |
| 0.1621 | 0.58 | 450 | 0.1363 | 0.9467 |
| 0.1663 | 0.64 | 500 | 0.1327 | 0.9533 |
| 0.1841 | 0.7 | 550 | 0.1347 | 0.9467 |
| 0.1742 | 0.77 | 600 | 0.1127 | 0.9533 |
| 0.1559 | 0.83 | 650 | 0.1119 | 0.9467 |
| 0.172 | 0.9 | 700 | 0.1123 | 0.9467 |
| 0.1644 | 0.96 | 750 | 0.1326 | 0.96 |
| 0.1524 | 1.02 | 800 | 0.1718 | 0.9467 |
| 0.1456 | 1.09 | 850 | 0.1464 | 0.9467 |
| 0.1271 | 1.15 | 900 | 0.1190 | 0.9533 |
| 0.1412 | 1.21 | 950 | 0.1323 | 0.9533 |
| 0.1114 | 1.28 | 1000 | 0.1475 | 0.9467 |
| 0.1222 | 1.34 | 1050 | 0.1592 | 0.9467 |
| 0.1164 | 1.41 | 1100 | 0.1345 | 0.96 |
| 0.1126 | 1.47 | 1150 | 0.1325 | 0.9533 |
| 0.1237 | 1.53 | 1200 | 0.1561 | 0.9533 |
| 0.1385 | 1.6 | 1250 | 0.1225 | 0.9467 |
| 0.1522 | 1.66 | 1300 | 0.1119 | 0.9533 |
| 0.1154 | 1.73 | 1350 | 0.1231 | 0.96 |
| 0.1182 | 1.79 | 1400 | 0.1366 | 0.96 |
| 0.1415 | 1.85 | 1450 | 0.0972 | 0.96 |
| 0.124 | 1.92 | 1500 | 0.1082 | 0.96 |
| 0.1584 | 1.98 | 1550 | 0.1770 | 0.96 |
| 0.0927 | 2.05 | 1600 | 0.1821 | 0.9533 |
| 0.1065 | 2.11 | 1650 | 0.0999 | 0.9733 |
| 0.0974 | 2.17 | 1700 | 0.1365 | 0.9533 |
| 0.079 | 2.24 | 1750 | 0.1694 | 0.9467 |
| 0.1217 | 2.3 | 1800 | 0.1564 | 0.9533 |
| 0.0676 | 2.37 | 1850 | 0.2116 | 0.9467 |
| 0.0832 | 2.43 | 1900 | 0.1779 | 0.9533 |
| 0.0899 | 2.49 | 1950 | 0.0999 | 0.9667 |
| 0.0898 | 2.56 | 2000 | 0.1502 | 0.9467 |
| 0.0955 | 2.62 | 2050 | 0.1776 | 0.9467 |
| 0.0989 | 2.69 | 2100 | 0.1279 | 0.9533 |
| 0.102 | 2.75 | 2150 | 0.1005 | 0.9667 |
| 0.0957 | 2.81 | 2200 | 0.1070 | 0.9667 |
| 0.0786 | 2.88 | 2250 | 0.1881 | 0.9467 |
| 0.0897 | 2.94 | 2300 | 0.1951 | 0.9533 |
| 0.0801 | 3.01 | 2350 | 0.1827 | 0.9467 |
| 0.0829 | 3.07 | 2400 | 0.1854 | 0.96 |
| 0.0665 | 3.13 | 2450 | 0.1775 | 0.9533 |
| 0.0838 | 3.2 | 2500 | 0.1700 | 0.96 |
| 0.0441 | 3.26 | 2550 | 0.1810 | 0.96 |
| 0.071 | 3.32 | 2600 | 0.2083 | 0.9533 |
| 0.0655 | 3.39 | 2650 | 0.1943 | 0.96 |
| 0.0565 | 3.45 | 2700 | 0.2486 | 0.9533 |
| 0.0669 | 3.52 | 2750 | 0.2540 | 0.9533 |
| 0.0671 | 3.58 | 2800 | 0.2140 | 0.9467 |
| 0.0857 | 3.64 | 2850 | 0.1609 | 0.9533 |
| 0.0585 | 3.71 | 2900 | 0.2067 | 0.9467 |
| 0.0597 | 3.77 | 2950 | 0.2380 | 0.9467 |
| 0.0932 | 3.84 | 3000 | 0.1727 | 0.9533 |
| 0.0744 | 3.9 | 3050 | 0.2099 | 0.9467 |
| 0.0679 | 3.96 | 3100 | 0.2034 | 0.9467 |
| 0.0447 | 4.03 | 3150 | 0.2461 | 0.9533 |
| 0.0486 | 4.09 | 3200 | 0.2032 | 0.9533 |
| 0.0409 | 4.16 | 3250 | 0.2468 | 0.9467 |
| 0.0605 | 4.22 | 3300 | 0.2422 | 0.9467 |
| 0.0319 | 4.28 | 3350 | 0.2681 | 0.9467 |
| 0.0483 | 4.35 | 3400 | 0.2222 | 0.9533 |
| 0.0801 | 4.41 | 3450 | 0.2247 | 0.9533 |
| 0.0333 | 4.48 | 3500 | 0.2190 | 0.9533 |
| 0.0432 | 4.54 | 3550 | 0.2167 | 0.9533 |
| 0.0381 | 4.6 | 3600 | 0.2507 | 0.9467 |
| 0.0647 | 4.67 | 3650 | 0.2410 | 0.9533 |
| 0.0427 | 4.73 | 3700 | 0.2447 | 0.9467 |
| 0.0627 | 4.8 | 3750 | 0.2332 | 0.9533 |
| 0.0569 | 4.86 | 3800 | 0.2358 | 0.9533 |
| 0.069 | 4.92 | 3850 | 0.2379 | 0.9533 |
| 0.0474 | 4.99 | 3900 | 0.2383 | 0.9467 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
hdc-labs/outputs
|
hdc-labs
| 2022-11-01T07:55:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-01T07:39:34Z |
---
tags:
- generated_from_trainer
datasets:
- common_voice
metrics:
- wer
model-index:
- name: outputs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice
type: common_voice
config: tr
split: train+validation
args: tr
metrics:
- name: Wer
type: wer
value: 0.35818608926565215
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model was trained from scratch on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3878
- Wer: 0.3582
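A minimal transcription sketch, assuming the repository ships a matching processor and that the input is 16 kHz mono Turkish speech (`"sample.wav"` is a placeholder path):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("hdc-labs/outputs")
model = Wav2Vec2ForCTC.from_pretrained("hdc-labs/outputs")

# Load audio at 16 kHz, then greedily decode the CTC logits
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```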
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7391 | 0.92 | 100 | 3.5760 | 1.0 |
| 2.927 | 1.83 | 200 | 3.0796 | 0.9999 |
| 0.9009 | 2.75 | 300 | 0.9278 | 0.8226 |
| 0.6529 | 3.67 | 400 | 0.5926 | 0.6367 |
| 0.3623 | 4.59 | 500 | 0.5372 | 0.5692 |
| 0.2888 | 5.5 | 600 | 0.4407 | 0.4838 |
| 0.285 | 6.42 | 700 | 0.4341 | 0.4694 |
| 0.0842 | 7.34 | 800 | 0.4153 | 0.4302 |
| 0.1415 | 8.26 | 900 | 0.4317 | 0.4136 |
| 0.1552 | 9.17 | 1000 | 0.4145 | 0.4013 |
| 0.1184 | 10.09 | 1100 | 0.4115 | 0.3844 |
| 0.0556 | 11.01 | 1200 | 0.4182 | 0.3862 |
| 0.0851 | 11.93 | 1300 | 0.3985 | 0.3688 |
| 0.0961 | 12.84 | 1400 | 0.4030 | 0.3665 |
| 0.0596 | 13.76 | 1500 | 0.3880 | 0.3631 |
| 0.0917 | 14.68 | 1600 | 0.3878 | 0.3582 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
kit-nlp/transformers-ud-japanese-electra-base-discriminator-cyberbullying
|
kit-nlp
| 2022-11-01T07:18:40Z | 37 | 2 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-09T04:08:15Z |
---
language: ja
license: cc-by-sa-4.0
---
# electra-base-cyberbullying
This is an [ELECTRA](https://github.com/google-research/electra) Base model for the Japanese language finetuned for automatic cyberbullying detection.
The model was based on [Megagon Labs ELECTRA Base](https://huggingface.co/megagonlabs/transformers-ud-japanese-electra-base-discriminator), and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset".
## Licenses
The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please, cite this model using the following citation.
```
@inproceedings{tanabe2022electra-base-cyberbullying,
title={北見工業大学 テキスト情報処理研究室 ELECTRA Base ネットいじめ検出モデル},
author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人},
publisher={HuggingFace},
year={2022},
url = "https://huggingface.co/kit-nlp/transformers-ud-japanese-electra-base-discriminator-cyberbullying"
}
```
|
kit-nlp/bert-base-japanese-sentiment-cyberbullying
|
kit-nlp
| 2022-11-01T07:18:05Z | 52 | 4 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-09T02:16:34Z |
---
language: ja
license: cc-by-sa-4.0
---
# electra-base-cyberbullying
This is a BERT Base model for the Japanese language finetuned for automatic cyberbullying detection.
The model was based on [daigo's BERT Base for Japanese sentiment analysis](https://huggingface.co/daigo/bert-base-japanese-sentiment), and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset".
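A minimal inference sketch; the example sentence is illustrative, the label names depend on the checkpoint's `id2label` configuration, and loading the Japanese BERT tokenizer may require extra dependencies such as `fugashi` and a MeCab dictionary:
```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="kit-nlp/bert-base-japanese-sentiment-cyberbullying",
)
print(detector("今日は一緒に遊べてとても楽しかったです。"))
```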
## Licenses
The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please, cite this model using the following citation.
```
@inproceedings{tanabe2022bert-base-cyberbullying,
title={北見工業大学 テキスト情報処理研究室 BERT Base ネットいじめ検出モデル (Daigo ver.)},
author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人},
publisher={HuggingFace},
year={2022},
url = "https://huggingface.co/kit-nlp/bert-base-japanese-sentiment-cyberbullying"
}
```
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-sen3
|
thisisHJLee
| 2022-11-01T07:17:15Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-01T02:14:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-sen3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-sen3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0111
- Cer: 0.0014
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.7932 | 1.0 | 1614 | 0.2858 | 0.0740 |
| 0.153 | 2.0 | 3228 | 0.0290 | 0.0054 |
| 0.08 | 3.0 | 4842 | 0.0111 | 0.0014 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kit-nlp/electra-small-japanese-discriminator-cyberbullying
|
kit-nlp
| 2022-11-01T07:14:15Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"ja",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-09T02:43:59Z |
---
language: ja
license: cc-by-sa-4.0
---
# electra-base-cyberbullying
This is an [ELECTRA](https://github.com/google-research/electra) Small model for the Japanese language finetuned for automatic cyberbullying detection.
The model was based on [Izumi Lab ELECTRA small Japanese discriminator](https://huggingface.co/izumi-lab/electra-small-japanese-discriminator), and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset".
## Licenses
The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License.
<a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a>
## Citations
Please, cite this model using the following citation.
```
@inproceedings{tanabe2022electra-small-cyberbullying,
title={北見工業大学 テキスト情報処理研究室 ELECTRA Small ネットいじめ検出モデル (Izumi Lab ver.)},
author={田邊 威裕 and プタシンスキ ミハウ and エロネン ユーソ and 桝井 文人},
publisher={HuggingFace},
year={2022},
url = "https://huggingface.co/kit-nlp/electra-small-japanese-discriminator-cyberbullying"
}
```
|
SSK0908/ppo-LunarLander-v2
|
SSK0908
| 2022-11-01T05:33:03Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-01T05:32:34Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.54 +/- 22.80
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside this repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename for the saved agent in this repository
checkpoint = load_from_hub("SSK0908/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
fanpu/model_output_subreddit-wallstreetbets_new
|
fanpu
| 2022-11-01T05:30:52Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T01:38:17Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: model_output_subreddit-wallstreetbets_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_output_subreddit-wallstreetbets_new
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6832
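A minimal generation sketch (the prompt is illustrative; generated text will mirror the tone of the subreddit data):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="fanpu/model_output_subreddit-wallstreetbets_new",
)
print(generator("Thinking about buying calls on", max_length=40, num_return_sequences=2))
```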
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.8945 | 1.19 | 5000 | 3.7710 |
| 3.6757 | 2.39 | 10000 | 3.7186 |
| 3.5115 | 3.58 | 15000 | 3.6829 |
| 3.3631 | 4.77 | 20000 | 3.6832 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
adit94/sentencetest
|
adit94
| 2022-11-01T04:30:14Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-01T03:42:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 625 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.SoftmaxLoss.SoftmaxLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 188,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/fienddddddd
|
huggingtweets
| 2022-11-01T03:45:19Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-01T03:39:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/fienddddddd/1667274315870/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1429983882741489668/TQAnTzje_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Golden Boy Noah</div>
<div style="text-align: center; font-size: 14px;">@fienddddddd</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Golden Boy Noah.
| Data | Golden Boy Noah |
| --- | --- |
| Tweets downloaded | 158 |
| Retweets | 30 |
| Short tweets | 12 |
| Tweets kept | 116 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/25q0d5x3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @fienddddddd's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ob718th) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ob718th/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/fienddddddd')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Mingguksky/PyTorch-StudioGAN
|
Mingguksky
| 2022-11-01T03:20:38Z | 0 | 18 | null |
[
"arxiv:2206.09479",
"region:us"
] | null | 2022-10-31T14:06:24Z |
<p align="center">
<img width="60%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/studiogan_logo.jpg" />
</p>

**StudioGAN** is a PyTorch library providing implementations of representative Generative Adversarial Networks (GANs) for conditional/unconditional image generation. StudioGAN aims to offer an identical playground for modern GANs so that machine learning researchers can readily compare and analyze a new idea.
This hub provides all the checkpoints we used to create the GAN benchmarks below.
Please visit our github repository ([PyTorch-StudioGAN](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN)) for more details.
<p align="center">
<img width="95%" src="https://raw.githubusercontent.com/POSTECH-CVLab/PyTorch-StudioGAN/master/docs/figures/StudioGAN_Benchmark.png"/>
</p>
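Since the files in this repository are raw StudioGAN checkpoints rather than `transformers`-format models, one hedged way to fetch a checkpoint programmatically is via `huggingface_hub`; the filename below is a hypothetical placeholder and should be replaced with an actual path from this repository's file list.

```python
from huggingface_hub import hf_hub_download

# Hypothetical filename -- replace with a real checkpoint path from the repo's file list.
checkpoint_path = hf_hub_download(
    repo_id="Mingguksky/PyTorch-StudioGAN",
    filename="checkpoints/BigGAN-ImageNet/model.pth",  # assumption, not a verified path
)
print(checkpoint_path)  # local cache path; load it with the PyTorch-StudioGAN codebase
```

The downloaded file can then be passed to the training/evaluation scripts in the PyTorch-StudioGAN repository.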
## License
PyTorch-StudioGAN is an open-source library under the MIT license (MIT). However, portions of the library are available under distinct license terms: StyleGAN2, StyleGAN2-ADA, and StyleGAN3 are licensed under [NVIDIA source code license](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/LICENSE-NVIDIA), and PyTorch-FID is licensed under [Apache License](https://github.com/POSTECH-CVLab/PyTorch-StudioGAN/blob/master/src/metrics/fid.py).
## Citation
StudioGAN is established for the following research projects. Please cite our work if you use StudioGAN.
```bib
@article{kang2022StudioGAN,
title = {{StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis}},
author = {MinGuk Kang and Joonghyuk Shin and Jaesik Park},
journal = {arXiv preprint arXiv:2206.09479},
year = {2022}
}
```
```bib
@inproceedings{kang2021ReACGAN,
title = {{Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training}},
author = {Minguk Kang and Woohyeon Shim and Minsu Cho and Jaesik Park},
booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
year = {2021}
}
```
```bib
@inproceedings{kang2020ContraGAN,
title = {{ContraGAN: Contrastive Learning for Conditional Image Generation}},
author = {Minguk Kang and Jaesik Park},
booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
year = {2020}
}
```
|
cheese7858/stance_detection
|
cheese7858
| 2022-11-01T02:39:11Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T03:22:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: stance_detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stance_detection
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) for stance detection on tweets about 26 US SPAC stock mergers.
It achieves the following results on the evaluation set:
- Loss: 0.4906
- Accuracy: 0.8409
- F1w: 0.8574
- Acc0: 0.8293
- Acc1: 0.6
- Acc2: 0.7652
- Acc3: 0.8637
## Model description
More information needed
## Intended uses & limitations
More information needed
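As a hedged usage sketch (this card does not document the mapping from label indices to stance classes, so the returned `LABEL_*` names are left uninterpreted):

```python
from transformers import pipeline

# Four-way stance classifier fine-tuned on SPAC-merger tweets; the meanings of
# LABEL_0 .. LABEL_3 are not documented in this card.
classifier = pipeline("text-classification", model="cheese7858/stance_detection")
print(classifier("Really excited about the upcoming SPAC merger vote!"))
```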
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1w | Acc0 | Acc1 | Acc2 | Acc3 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:----:|:------:|:------:|
| 0.7748 | 1.0 | 194 | 0.5172 | 0.8158 | 0.8297 | 0.8699 | 0.0 | 0.7429 | 0.8248 |
| 0.5181 | 2.0 | 388 | 0.4692 | 0.8509 | 0.8587 | 0.8699 | 0.4 | 0.7429 | 0.8743 |
| 0.3868 | 3.0 | 582 | 0.4906 | 0.8409 | 0.8574 | 0.8293 | 0.6 | 0.7652 | 0.8637 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.11.0
- Datasets 2.5.2
- Tokenizers 0.12.1
|
rufimelo/Legal-BERTimbau-base-TSDAE-sts
|
rufimelo
| 2022-11-01T01:31:11Z | 3 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"pt",
"dataset:assin",
"dataset:assin2",
"dataset:stsb_multi_mt",
"dataset:rufimelo/PortugueseLegalSentences-v1",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-29T17:36:41Z |
---
language:
- pt
thumbnail: "Portuguese BERT for the Legal Domain"
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- transformers
datasets:
- assin
- assin2
- stsb_multi_mt
- rufimelo/PortugueseLegalSentences-v1
widget:
- source_sentence: "O advogado apresentou as provas ao juíz."
sentences:
- "O juíz leu as provas."
- "O juíz leu o recurso."
- "O juíz atirou uma pedra."
example_title: "Example 1"
model-index:
- name: BERTimbau
results:
- task:
name: STS
type: STS
metrics:
- name: Pearson Correlation - assin Dataset
type: Pearson Correlation
value: 0.78814
- name: Pearson Correlation - assin2 Dataset
type: Pearson Correlation
value: 0.81380
- name: Pearson Correlation - stsb_multi_mt pt Dataset
type: Pearson Correlation
value: 0.75777
---
# rufimelo/Legal-BERTimbau-base-TSDAE-sts
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
rufimelo/Legal-BERTimbau-base-TSDAE-sts is based on rufimelo/Legal-BERTimbau-base-TSDAE, which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It is adapted to the Portuguese legal domain and trained for STS on Portuguese datasets.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('rufimelo/Legal-BERTimbau-base-TSDAE-sts')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-base-TSDAE-sts')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-base-TSDAE-sts')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results STS
| Model| Assin | Assin2|stsb_multi_mt pt| avg|
| ---------------------------------------- | ---------- | ---------- |---------- |---------- |
| Legal-BERTimbau-sts-base| 0.71457| 0.73545 | 0.72383|0.72462|
| Legal-BERTimbau-sts-base-ma| 0.74874 | 0.79532|0.82254 |0.78886|
| Legal-BERTimbau-sts-base-ma-v2| 0.75481 | 0.80262|0.82178|0.79307|
| Legal-BERTimbau-base-TSDAE-sts|0.78814 |0.81380 |0.75777|0.78657|
| Legal-BERTimbau-sts-large| 0.76629| 0.82357 | 0.79120|0.79369|
| Legal-BERTimbau-sts-large-v2| 0.76299 | 0.81121|0.81726 |0.79715|
| Legal-BERTimbau-sts-large-ma| 0.76195| 0.81622 | 0.82608|0.80142|
| Legal-BERTimbau-sts-large-ma-v2| 0.7836| 0.8462| 0.8261| 0.81863|
| Legal-BERTimbau-sts-large-ma-v3| 0.7749| **0.8470**| 0.8364| **0.81943**|
| Legal-BERTimbau-large-v2-sts| 0.71665| 0.80106| 0.73724| 0.75165|
| Legal-BERTimbau-large-TSDAE-sts| 0.72376| 0.79261| 0.73635| 0.75090|
| Legal-BERTimbau-large-TSDAE-sts-v2| 0.81326| 0.83130| 0.786314| 0.81029|
| Legal-BERTimbau-large-TSDAE-sts-v3|0.80703 |0.82270 |0.77638 |0.80204 |
| ---------------------------------------- | ---------- |---------- |---------- |---------- |
| BERTimbau base Fine-tuned for STS|**0.78455** | 0.80626|0.82841|0.80640|
| BERTimbau large Fine-tuned for STS|0.78193 | 0.81758|0.83784|0.81245|
| ---------------------------------------- | ---------- |---------- |---------- |---------- |
| paraphrase-multilingual-mpnet-base-v2| 0.71457| 0.79831 |0.83999 |0.78429|
| paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s)| 0.77641|0.79831 |**0.84575**|0.80682|
## Training
rufimelo/Legal-BERTimbau-base-TSDAE-sts is based on rufimelo/Legal-BERTimbau-base-TSDAE which derives from [BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased) large.
rufimelo/Legal-BERTimbau-base-TSDAE was trained with TSDAE on 50,000 cleaned documents (https://huggingface.co/datasets/rufimelo/PortugueseLegalSentences-v1) with a learning rate of 1e-5.
It was then fine-tuned for Semantic Textual Similarity on the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2) and [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) datasets, also with a learning rate of 1e-5.
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
If you use this work, please cite:
```bibtex
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
```
|
Assadullah/donut-base-sroie2
|
Assadullah
| 2022-11-01T01:15:12Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-09-21T02:28:53Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie2
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
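A minimal, hedged inference sketch is shown below. Donut checkpoints are typically driven through `DonutProcessor` and `VisionEncoderDecoderModel`; the task-start token `<s_sroie>` is an assumption here, so check the tokenizer's special tokens for the prompt actually used during fine-tuning.

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("Assadullah/donut-base-sroie2")
model = VisionEncoderDecoderModel.from_pretrained("Assadullah/donut-base-sroie2")

# Any receipt-like document image; "receipt.png" is just a placeholder path.
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s_sroie>" is an assumed task-start token; inspect the tokenizer's special
# tokens to find the prompt actually used during fine-tuning.
task_prompt = "<s_sroie>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```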
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.8.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/anders-zorn
|
sd-concepts-library
| 2022-11-01T00:49:33Z | 0 | 13 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-01T00:49:30Z |
---
license: mit
---
### Anders Zorn on Stable Diffusion
This is the `<anders-zorn>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:












|
microsoft/cocolm-base
|
microsoft
| 2022-11-01T00:26:13Z | 11 | 6 |
transformers
|
[
"transformers",
"pytorch",
"arxiv:2102.08473",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:05Z |
# Model Card for COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining
# Model Details
## Model Description
This model card describes the COCO-LM pretrained model (**base++** version), together with its results on the GLUE and SQuAD 2.0 benchmarks.
- **Developed by:** Microsoft
- **Shared by [Optional]:** HuggingFace
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** MIT
- **Related Models:** More information needed
- **Parent Model:** More information needed
- **Resources for more information:**
- [GitHub Repo](https://github.com/microsoft/COCO-LM)
- [Associated Paper](https://arxiv.org/abs/2102.08473)
# Uses
## Direct Use
Correcting and Contrasting Text Sequences for Language Model Pretraining
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
See the associated [dataset card]({0}) for further details
## Training Procedure
### Preprocessing
The model developers note in the [associated paper](https://arxiv.org/abs/2102.08473):
>We employ three standard settings, base, base++, and large++. Base is the BERTBase training configuration: Pretraining on Wikipedia and BookCorpus (16 GB of texts) for 256 million samples on 512 token sequences (125K batches with 2048 batch size). We use the same corpus and 32,768 uncased BPE vocabulary as with TUPE. Base++ trains the base size model with larger corpora and/or more training steps. Following recent research, we add in OpenWebText, CC-News, and STORIES, to a total of 160 GB texts, and train for 4 billion (with 2048 batch size) samples. We follow the preprocessing of UniLMV2 and use 64,000 cased BPE vocabulary. Large++ uses the same training corpora as base++ and pretrains for 4 billion samples (2048 batch size). Its Transformer configuration is the same as BERTLarge.
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
GLUE
SQuAD 2.0
### Factors
All results are single-task, single-model fine-tuning.
### Metrics
More information needed
## Results
### GLUE Fine-Tuning Results
The [General Language Understanding Evaluation (GLUE)](https://gluebenchmark.com/) benchmark is a collection of sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.
GLUE dev set results of COCO-LM base++ and large++ models are as follows (median of 5 different random seeds):
| Model | MNLI-m/mm | QQP | QNLI | SST-2 | CoLA | RTE | MRPC | STS-B | AVG |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| COCO-LM base++ | 90.2/90.0 | 92.2 | 94.2 | 94.6 | 67.3 | 87.4 | 91.2 | 91.8 | 88.6 |
| COCO-LM large++ | 91.4/91.6 | 92.8 | 95.7 | 96.9 | 73.9 | 91.0 | 92.2 | 92.7 | 90.8 |
GLUE test set results of COCO-LM base++ and large++ models are as follows (no ensemble, task-specific tricks, etc.):
| Model | MNLI-m/mm | QQP | QNLI | SST-2 | CoLA | RTE | MRPC | STS-B | AVG |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| COCO-LM base++ | 89.8/89.3 | 89.8 | 94.2 | 95.6 | 68.6 | 82.3 | 88.5 | 90.3 | 87.4 |
| COCO-LM large++ | 91.6/91.1 | 90.5 | 95.8 | 96.7 | 70.5 | 89.2 | 88.4 | 91.8 | 89.3 |
### SQuAD 2.0 Fine-Tuning Results
[Stanford Question Answering Dataset (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
SQuAD 2.0 dev set results of COCO-LM base++ and large++ models are as follows (median of 5 different random seeds):
| Model | EM | F1 |
| ------ | ------ | ------ |
| COCO-LM base++ | 85.4 | 88.1 |
| COCO-LM large++ | 88.2 | 91.0 |
# Model Examination
The model developers note in the [associated paper](https://arxiv.org/abs/2102.08473):
>Architecture. Removing relative position encoding (Rel-Pos) leads to better numbers on some tasks but significantly hurts MNLI. Using a shallow auxiliary network and keeping the same hidden dimension (768) is more effective than ELECTRA’s 12-layer but 256-hidden dimension generator.
>One limitation of this work is that the contrastive pairs are constructed by simple cropping and MLM replacements. Recent studies have shown the effectiveness of advanced data augmentation techniques in fine-tuning language models [16, 38, 51]. A future research direction is to explore better ways to construct contrastive pairs in language model pretraining.
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
The model developers note in the [associated paper](https://arxiv.org/abs/2102.08473):
>Model Architecture. Our base/base++ model uses the BERTBase architecture: 12 layer Transformer, 768 hidden size, plus T5 relative position encoding. Our large++ model is the same as BERTLarge, 24 layer and 1024 hidden size, plus T5 relative position encoding. Our auxiliary network uses the same hidden size but a shallow 4-layer Transformer in base/base++ and a 6-layer one in large++. When generating XMLM we disable dropout in the auxiliary model.
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
**BibTeX:**
If you find the code and models useful for your research, please cite the following paper:
```
@inproceedings{meng2021cocolm,
title={{COCO-LM}: Correcting and contrasting text sequences for language model pretraining},
author={Meng, Yu and Xiong, Chenyan and Bajaj, Payal and Tiwary, Saurabh and Bennett, Paul and Han, Jiawei and Song, Xia},
booktitle={Conference on Neural Information Processing Systems},
year={2021}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Microsoft in collaboration with Ezi Ozoani and the HuggingFace team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("microsoft/cocolm-base")
```
</details>
|
LeKazuha/Tf-base
|
LeKazuha
| 2022-10-31T23:32:43Z | 62 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-31T23:27:29Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Tf-base
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Tf-base
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
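As a hedged usage sketch (the checkpoint ships TensorFlow weights, so the TF framework is requested explicitly; the question/context pair below is illustrative only):

```python
from transformers import pipeline

# Extractive question answering with the TensorFlow weights in this repository.
qa = pipeline("question-answering", model="LeKazuha/Tf-base", framework="tf")
result = qa(
    question="Which base model was fine-tuned?",
    context="Tf-base is a fine-tuned version of distilbert-base-uncased on an unknown dataset.",
)
print(result)  # expected keys: score, start, end, answer
```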
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.7.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Sangeetha/fineTunedCategory
|
Sangeetha
| 2022-10-31T23:21:49Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-31T23:21:34Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 183 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 183,
"warmup_steps": 19,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sd-concepts-library/degodsheavy
|
sd-concepts-library
| 2022-10-31T22:14:24Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-31T22:14:13Z |
---
license: mit
---
### DeGodsHeavy on Stable Diffusion
This is the `<degods-heavy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:























|
LeKazuha/distilbert-base-uncased-finetuned-squad
|
LeKazuha
| 2022-10-31T22:03:59Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-18T21:31:52Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LeKazuha/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LeKazuha/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1946
- Train End Logits Accuracy: 0.6778
- Train Start Logits Accuracy: 0.6365
- Validation Loss: 1.1272
- Validation End Logits Accuracy: 0.6948
- Validation Start Logits Accuracy: 0.6569
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.1946 | 0.6778 | 0.6365 | 1.1272 | 0.6948 | 0.6569 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.7.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
SiddharthaM/twitter-data-distilbert-base-uncased-sentiment-finetuned-memes-test
|
SiddharthaM
| 2022-10-31T21:14:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-31T16:48:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: twitter-data-distilbert-base-uncased-sentiment-finetuned-memes-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter-data-distilbert-base-uncased-sentiment-finetuned-memes-test
This model is a fine-tuned version of [jayantapaul888/twitter-data-distilbert-base-uncased-sentiment-finetuned-memes](https://huggingface.co/jayantapaul888/twitter-data-distilbert-base-uncased-sentiment-finetuned-memes) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6985
- Accuracy: 0.8218
- Precision: 0.8225
- Recall: 0.8218
- F1: 0.8221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 294 | 0.4189 | 0.8124 | 0.8151 | 0.8124 | 0.8136 |
| 0.4351 | 2.0 | 588 | 0.4057 | 0.8258 | 0.8275 | 0.8258 | 0.8263 |
| 0.4351 | 3.0 | 882 | 0.4628 | 0.8220 | 0.8244 | 0.8220 | 0.8224 |
| 0.2248 | 4.0 | 1176 | 0.5336 | 0.8250 | 0.8258 | 0.8250 | 0.8253 |
| 0.2248 | 5.0 | 1470 | 0.6466 | 0.8208 | 0.8217 | 0.8208 | 0.8212 |
| 0.1158 | 6.0 | 1764 | 0.6985 | 0.8218 | 0.8225 | 0.8218 | 0.8221 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|