Dataset columns and summary statistics:

| Column | Type | Values / length range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | listlengths | 0 to 201 |
| languages | listlengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | listlengths | 0 to 722 |
| processed_texts | listlengths | 1 to 723 |
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-bert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
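The returned embeddings can be compared directly. As a minimal sketch (reusing the two example sentences above and plain PyTorch, nothing model-specific), cosine similarity between the two vectors looks like this:

```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-big')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)

# Cosine similarity between the two 768-dimensional sentence embeddings
score = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(float(score))
```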
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
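For reference, a hedged sketch of how a run with these settings could be reproduced through the sentence-transformers `fit()` API is shown below. The training pairs and the starting checkpoint are placeholders; the actual Hinglish training data and base model are not part of this card.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder pairs with similarity labels in [0, 1]; the real training set is not included here.
train_examples = [
    InputExample(texts=["kal milte hain", "see you tomorrow"], label=0.9),
    InputExample(texts=["mujhe chai pasand hai", "the server crashed again"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

model = SentenceTransformer("bert-base-multilingual-cased")  # assumed starting checkpoint
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```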
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-bert-hinglish-big
- last_modified: null
- tags: sentence-transformers, pytorch, bert, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-bert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
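Beyond encoding pairs, the embeddings can back a small semantic-search setup. The sketch below uses `sentence_transformers.util.semantic_search` with a made-up corpus and query:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('aditeyabaral/sentencetransformer-bert-hinglish-small')

# Made-up corpus and query; any Hinglish or English sentences work the same way.
corpus = [
    "kal movie dekhne chalein?",
    "the weather is nice today",
    "mujhe coding karna pasand hai",
]
query = "shall we watch a film tomorrow?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Returns one result list per query; each entry holds 'corpus_id' and 'score'.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], hit["score"])
```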
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-bert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-bert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-bert-hinglish-small
- last_modified: null
- tags: sentence-transformers, pytorch, bert, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-contrastive-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-contrastive-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-contrastive-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-contrastive-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-contrastive-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.ContrastiveLoss.ContrastiveLoss` with parameters:
```
{'distance_metric': 'SiameseDistanceMetric.COSINE_DISTANCE', 'margin': 0.5, 'size_average': True}
```
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
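For reference, a rough sketch of a comparable training loop with `ContrastiveLoss` is given below. The labelled pairs and the starting checkpoint are placeholders, not the data or base model actually used.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder pairs: label 1 = similar, 0 = dissimilar; the real training data is not included here.
train_examples = [
    InputExample(texts=["A plane is taking off.", "An air plane is taking off."], label=1),
    InputExample(texts=["A man is playing a flute.", "A man is eating pasta."], label=0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("roberta-base")  # assumed starting checkpoint
train_loss = losses.ContrastiveLoss(model, margin=0.5)  # cosine distance is the default metric

# Mirrors the fit() parameters listed above.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```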
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-contrastive-roberta-base
- last_modified: null
- tags: sentence-transformers, pytorch, roberta, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-distilbert-base-cased
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased')
embeddings = model.encode(sentences)
print(embeddings)
```
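Since the card lists clustering as a downstream use, here is a small illustrative sketch that groups a few made-up sentences with scikit-learn's KMeans on top of the embeddings:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-base-cased')

# Made-up corpus purely for illustration.
corpus = [
    "A man is eating food.",
    "A man is eating a piece of bread.",
    "The girl is carrying a baby.",
    "A woman is riding a horse.",
]
embeddings = model.encode(corpus)

# Two clusters over the 768-dimensional embeddings.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10).fit(embeddings)
for label, sentence in zip(kmeans.labels_, corpus):
    print(label, sentence)
```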
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-base-cased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-base-cased)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-distilbert-base-cased
- last_modified: null
- tags: sentence-transformers, pytorch, distilbert, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-distilbert-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
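The architecture printed above can also be assembled by hand from sentence-transformers building blocks. The sketch below mirrors the printed modules and assumes this repository is reused as the underlying word-embedding model:

```python
from sentence_transformers import SentenceTransformer, models

# Mirrors the architecture above: a DistilBERT encoder truncated at 128 tokens plus mean pooling.
word_embedding_model = models.Transformer(
    "aditeyabaral/sentencetransformer-distilbert-hinglish-big",
    max_seq_length=128,
)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)
```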
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-distilbert-hinglish-big
- last_modified: null
- tags: sentence-transformers, pytorch, distilbert, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-distilbert-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
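For larger inputs, `encode()` accepts a batch size and an optional progress bar. A small sketch (the sentences are placeholders; the output shape follows from the 768-dimensional embeddings described above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('aditeyabaral/sentencetransformer-distilbert-hinglish-small')

# Encode a batch of sentences; batch_size and show_progress_bar are optional knobs.
sentences = [f"example sentence number {i}" for i in range(64)]
embeddings = model.encode(sentences, batch_size=32, show_progress_bar=True)
print(embeddings.shape)  # expected: (64, 768)
```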
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-distilbert-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-distilbert-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-distilbert-hinglish-small
- last_modified: null
- tags: sentence-transformers, pytorch, distilbert, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-indic-bert
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-indic-bert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-indic-bert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
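If the pooled embeddings are L2-normalised, a plain matrix product yields the full cosine-similarity matrix. A self-contained sketch (using the sentence-transformers loader for brevity rather than the manual pooling above):

```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('aditeyabaral/sentencetransformer-indic-bert')
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)

# After L2 normalisation, dot products are cosine similarities.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings @ embeddings.T)
```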
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-indic-bert)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
**Dataset row metadata**
- metadata: {"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
- id: aditeyabaral/sentencetransformer-indic-bert
- last_modified: null
- tags: sentence-transformers, pytorch, albert, feature-extraction, sentence-similarity, transformers, endpoints_compatible, region:us
- sha: null
- created_at: 2022-03-02T23:29:05+00:00
- arxiv: []
- languages: []
pipeline_tag: sentence-similarity | library_name: sentence-transformers
# aditeyabaral/sentencetransformer-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
aditeyabaral/sentencetransformer-roberta-base
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# aditeyabaral/sentencetransformer-roberta-base
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 9234 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# aditeyabaral/sentencetransformer-roberta-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 9234 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# aditeyabaral/sentencetransformer-roberta-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 9234 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# aditeyabaral/sentencetransformer-roberta-hinglish-big
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-hinglish-big')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-big')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-big')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-hinglish-big)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
aditeyabaral/sentencetransformer-roberta-hinglish-big
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# aditeyabaral/sentencetransformer-roberta-hinglish-big
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 4617 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# aditeyabaral/sentencetransformer-roberta-hinglish-big\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 4617 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# aditeyabaral/sentencetransformer-roberta-hinglish-big\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 4617 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# aditeyabaral/sentencetransformer-roberta-hinglish-small
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-roberta-hinglish-small')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-small')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-roberta-hinglish-small')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-roberta-hinglish-small)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4617 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
aditeyabaral/sentencetransformer-roberta-hinglish-small
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# aditeyabaral/sentencetransformer-roberta-hinglish-small
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 4617 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# aditeyabaral/sentencetransformer-roberta-hinglish-small\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 4617 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# aditeyabaral/sentencetransformer-roberta-hinglish-small\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 4617 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
sentence-similarity
|
sentence-transformers
|
# aditeyabaral/sentencetransformer-xlm-roberta-base
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('aditeyabaral/sentencetransformer-xlm-roberta-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('aditeyabaral/sentencetransformer-xlm-roberta-base')
model = AutoModel.from_pretrained('aditeyabaral/sentencetransformer-xlm-roberta-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=aditeyabaral/sentencetransformer-xlm-roberta-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 9234 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
aditeyabaral/sentencetransformer-xlm-roberta-base
| null |
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# aditeyabaral/sentencetransformer-xlm-roberta-base
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 9234 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# aditeyabaral/sentencetransformer-xlm-roberta-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 9234 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #xlm-roberta #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# aditeyabaral/sentencetransformer-xlm-roberta-base\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 9234 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text2text-generation
|
transformers
|
T5 model
This is a T5-based paraphrase generation model.
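No usage snippet is provided; a minimal paraphrasing sketch (the `paraphrase: ` task prefix and the generation settings are assumptions, not documented by the author) might look like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("aditi2222/t5-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("aditi2222/t5-paraphrase")

# T5 paraphrasers are usually prompted with a task prefix; the exact prefix used in training is an assumption here.
text = "paraphrase: The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Return a few candidate paraphrases via beam search.
outputs = model.generate(**inputs, max_length=64, num_beams=5, num_return_sequences=3)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
```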
|
{}
|
aditi2222/t5-paraphrase
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
T5 model
This is a T5-based paraphrase generation model.
|
[] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
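The card does not include an inference example; a minimal sketch using the standard token-classification pipeline (the label set follows the usual CoNLL-2003 scheme, which is an assumption here) could be:
```python
from transformers import pipeline

# Hypothetical usage sketch; the repository id comes from this card's metadata.
ner = pipeline(
    "token-classification",
    model="adityavithaldas/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```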
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"]}
|
adityavithaldas/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
[
"# distilbert-base-uncased-finetuned-ner\n\nThis model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.10.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# distilbert-base-uncased-finetuned-ner\n\nThis model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Framework versions\n\n- Transformers 4.10.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.12.1\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("adresgezgini/Wav2Vec-tr-AG-v1")
model = Wav2Vec2ForCTC.from_pretrained("adresgezgini/Wav2Vec-tr-AG-v1")
```
The audio files ses1.mp3 [1], ses2.mp3 [2] and ses3.mp3 [3] shared in the Files section were created by taking 1–1.5 minute segments from open-source audiobook recordings. The model was tested with these clips and the WER values were recorded.
<div align="center">
|Audio file|WER|
| :---: | :---: |
|SES1.mp3|0.17|
|SES2.mp3|0.31|
|SES3.mp3|0.20|
</div>
[1][Sabahattin Ali - Çaydanlık | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=IHUfOpqw-8s)\
[2][Sabahattin Ali - Ses | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=XzX2wBjncOg)\
[3][Sabahattin Ali - Sıçra Köşk | YT: Sesli Kitap Dünyası](https://www.youtube.com/watch?v=SJwUaq0Nu9c)\
|
{}
|
adresgezgini/Wav2Vec2-tr-AG-v1
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
The audio files ses1.mp3 [1], ses2.mp3 [2] and ses3.mp3 [3] shared in the Files section were created by taking 1–1.5 minute segments from open-source audiobook recordings. The model was tested with these clips and the WER values were recorded.
[1]Sabahattin Ali - Çaydanlık | YT: Sesli Kitap Dünyası
[2]Sabahattin Ali - Ses | YT: Sesli Kitap Dünyası
[3]Sabahattin Ali - Sıçra Köşk | YT: Sesli Kitap Dünyası
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
AdresGezgini Inc. R&D Center Turkish GPT-2 Model Trained with Turkish Wiki Corpus for 10 Epochs
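No usage example is given; a minimal sketch for sampling Turkish text from the model (the prompt and sampling settings are illustrative assumptions) might be:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("adresgezgini/turkish-gpt-2")
model = AutoModelForCausalLM.from_pretrained("adresgezgini/turkish-gpt-2")

prompt = "Türkiye'nin başkenti"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation; top-k/top-p values are arbitrary choices for this sketch.
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```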
|
{}
|
adresgezgini/turkish-gpt-2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
AdresGezgini Inc. R&D Center Turkish GPT-2 Model Trained with Turkish Wiki Corpus for 10 Epochs
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec-tr-lite-AG
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
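The snippet above stops after building the resampler. A sketch of the remaining steps of the usual Common Voice evaluation recipe is given below; it reuses the `processor`, `model`, `resampler` and `test_dataset` defined above, and the `path`/`sentence` column names are assumptions taken from the standard `common_voice` dataset layout, not from this card:
```python
def speech_file_to_array_fn(batch):
    # Load each clip and resample from 48 kHz to the 16 kHz the model expects.
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```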
**Test Result (WER)**: 27.30 %

More details are available [here](https://adresgezgini.com).
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["common_voice"], "metrics": ["wer"]}
|
adresgezgini/wav2vec-tr-lite-AG
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec-tr-lite-AG
## Usage
The model can be used directly (without a language model) as follows:
'''python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
Test Result: 27.30 %
here
|
[
"# wav2vec-tr-lite-AG",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\n'''python\nimport torch\nimport torchaudio\nfrom datasets import load_dataset\nfrom transformers import Wav2Vec2ForCTC, Wav2Vec2Processor\n\ntest_dataset = load_dataset(\"common_voice\", \"tr\", split=\"test[:2%]\") \n\nprocessor = Wav2Vec2Processor.from_pretrained(\"emre/wav2vec-tr-lite-AG\")\nmodel = Wav2Vec2ForCTC.from_pretrained(\"emre/wav2vec-tr-lite-AG\")\n\nresampler = torchaudio.transforms.Resample(48_000, 16_000)\n\nTest Result: 27.30 %\n\n\n here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec-tr-lite-AG",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\n'''python\nimport torch\nimport torchaudio\nfrom datasets import load_dataset\nfrom transformers import Wav2Vec2ForCTC, Wav2Vec2Processor\n\ntest_dataset = load_dataset(\"common_voice\", \"tr\", split=\"test[:2%]\") \n\nprocessor = Wav2Vec2Processor.from_pretrained(\"emre/wav2vec-tr-lite-AG\")\nmodel = Wav2Vec2ForCTC.from_pretrained(\"emre/wav2vec-tr-lite-AG\")\n\nresampler = torchaudio.transforms.Resample(48_000, 16_000)\n\nTest Result: 27.30 %\n\n\n here"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 19333717
- CO2 Emissions (in grams): 88.89388195672073
## Validation Metrics
- Loss: 1.0499154329299927
- Accuracy: 0.6207088513638894
- Macro F1: 0.46250803661544765
- Micro F1: 0.6207088513638894
- Weighted F1: 0.5850362079928957
- Macro Precision: 0.6451479987704787
- Micro Precision: 0.6207088513638894
- Weighted Precision: 0.6285080101186085
- Macro Recall: 0.4405680478429344
- Micro Recall: 0.6207088513638894
- Weighted Recall: 0.6207088513638894
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["adrianmoses/autonlp-data-auto-nlp-lyrics-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 88.89388195672073}
|
adrianmoses/autonlp-auto-nlp-lyrics-classification-19333717
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:adrianmoses/autonlp-data-auto-nlp-lyrics-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-adrianmoses/autonlp-data-auto-nlp-lyrics-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 19333717
- CO2 Emissions (in grams): 88.89388195672073
## Validation Metrics
- Loss: 1.0499154329299927
- Accuracy: 0.6207088513638894
- Macro F1: 0.46250803661544765
- Micro F1: 0.6207088513638894
- Weighted F1: 0.5850362079928957
- Macro Precision: 0.6451479987704787
- Micro Precision: 0.6207088513638894
- Weighted Precision: 0.6285080101186085
- Macro Recall: 0.4405680478429344
- Micro Recall: 0.6207088513638894
- Weighted Recall: 0.6207088513638894
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 19333717\n- CO2 Emissions (in grams): 88.89388195672073",
"## Validation Metrics\n\n- Loss: 1.0499154329299927\n- Accuracy: 0.6207088513638894\n- Macro F1: 0.46250803661544765\n- Micro F1: 0.6207088513638894\n- Weighted F1: 0.5850362079928957\n- Macro Precision: 0.6451479987704787\n- Micro Precision: 0.6207088513638894\n- Weighted Precision: 0.6285080101186085\n- Macro Recall: 0.4405680478429344\n- Micro Recall: 0.6207088513638894\n- Weighted Recall: 0.6207088513638894",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-adrianmoses/autonlp-data-auto-nlp-lyrics-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 19333717\n- CO2 Emissions (in grams): 88.89388195672073",
"## Validation Metrics\n\n- Loss: 1.0499154329299927\n- Accuracy: 0.6207088513638894\n- Macro F1: 0.46250803661544765\n- Micro F1: 0.6207088513638894\n- Weighted F1: 0.5850362079928957\n- Macro Precision: 0.6451479987704787\n- Micro Precision: 0.6207088513638894\n- Weighted Precision: 0.6285080101186085\n- Macro Recall: 0.4405680478429344\n- Micro Recall: 0.6207088513638894\n- Weighted Recall: 0.6207088513638894",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
null | null |
# Hate Speech Detection Model
Created from dataset provided by ROHAN KHILNANI
|
{}
|
adrianmoses/hate-speech-detection
| null |
[
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#has_space #region-us
|
# Hate Speech Detection Model
Created from dataset provided by ROHAN KHILNANI
|
[
"# Hate Speech Detection Model\n\n\nCreated from dataset provided by ROHAN KHILNANI"
] |
[
"TAGS\n#has_space #region-us \n",
"# Hate Speech Detection Model\n\n\nCreated from dataset provided by ROHAN KHILNANI"
] |
text-generation
|
transformers
|
# Rick DialoGPT medium model
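The card gives no example; a minimal interactive sketch following the standard DialoGPT chat loop (the loop itself is generic DialoGPT boilerplate, not something documented by this card) could be:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("adviksinghania/DialoGPT-medium-rick")
model = AutoModelForCausalLM.from_pretrained("adviksinghania/DialoGPT-medium-rick")

chat_history_ids = None
for step in range(5):
    # Encode the user's turn and append the end-of-string token.
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)

    # Generate a reply while keeping the whole history under 1000 tokens.
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```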
|
{"tags": ["conversational"]}
|
adviksinghania/DialoGPT-medium-rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DialoGPT medium model
|
[
"# Rick DialoGPT medium model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DialoGPT medium model"
] |
token-classification
|
transformers
|
distilbert-base-uncased finetuned on the conll2003 dataset for NER.
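A usage sketch is not provided by the author; running the model manually (the example sentence is arbitrary, and label names are read from the model config) might look like this:
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("adzcodez/TokenClassificationTest")
model = AutoModelForTokenClassification.from_pretrained("adzcodez/TokenClassificationTest")

text = "George Washington lived in Virginia."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its highest-scoring label.
predictions = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[int(label_id)])
```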
|
{}
|
adzcodez/TokenClassificationTest
| null |
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased finetuned on the conll2003 dataset for NER.
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
## A conversational agent with many personalities (PersonaGPT)
PersonaGPT is an open-domain conversational agent designed to do 2 tasks:
1. decoding _personalized_ responses based on input personality facts (the "persona" profile of the bot).
2. incorporating _turn-level goals_ into its responses through "action codes" (e.g., "talk about work", "ask about favorite music").
It builds on the [DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) pretrained model based on the [GPT-2](https://github.com/openai/gpt-2) architecture.
This model is trained on the [Persona-Chat](https://arxiv.org/pdf/1801.07243) dataset, with added special tokens to better distinguish between conversational history and personality traits for dyadic conversations. Furthermore, some active learning was used to train the model to do _controlled_ decoding using turn-level goals.
## Full Repo
Preprocessing, training and implementation details can be found in the [personaGPT repo](https://github.com/af1tang/personaGPT).
### How to Use
1. Load the model and define some helper functions.
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
tokenizer = GPT2Tokenizer.from_pretrained("af1tang/personaGPT")
model = GPT2LMHeadModel.from_pretrained("af1tang/personaGPT")
if torch.cuda.is_available():
model = model.cuda()
## utility functions ##
flatten = lambda l: [item for sublist in l for item in sublist]
def to_data(x):
if torch.cuda.is_available():
x = x.cpu()
return x.data.numpy()
def to_var(x):
if not torch.is_tensor(x):
x = torch.Tensor(x)
if torch.cuda.is_available():
x = x.cuda()
return x
def display_dialog_history(dialog_hx):
for j, line in enumerate(dialog_hx):
msg = tokenizer.decode(line)
if j %2 == 0:
print(">> User: "+ msg)
else:
print("Bot: "+msg)
print()
def generate_next(bot_input_ids, do_sample=True, top_k=10, top_p=.92,
                  max_length=1000, pad_token=tokenizer.eos_token_id):
    # Pass the sampling settings through instead of hard-coding them in the call.
    full_msg = model.generate(bot_input_ids, do_sample=do_sample,
                              top_k=top_k, top_p=top_p,
                              max_length=max_length, pad_token_id=pad_token)
    msg = to_data(full_msg.detach()[0])[bot_input_ids.shape[-1]:]
    return msg
```
2. Give your chatbot partner a set of personalities.
```python
# get personality facts for conversation
personas = []
for i in range(3):
response = input(">> Fact %d: "%(i+1))+ tokenizer.eos_token
personas.append(response)
personas = tokenizer.encode(''.join(['<|p2|>'] + personas + ['<|sep|>'] + ['<|start|>']))
```
3. The first use of PersonaGPT is to do _personalized_ dialog generation. Use the following loop to interact with the model.
```python
# converse for 8 turns
dialog_hx = []
for step in range(8):
# encode the user input
user_inp = tokenizer.encode(input(">> User: ") + tokenizer.eos_token)
# append to the chat history
dialog_hx.append(user_inp)
    # generate a response while limiting the total chat history to 1000 tokens
bot_input_ids = to_var([personas + flatten(dialog_hx)]).long()
msg = generate_next(bot_input_ids)
dialog_hx.append(msg)
print("Bot: {}".format(tokenizer.decode(msg, skip_special_tokens=True)))
```
Example of personalized decoding:
| | Persona Facts |
|---|---|
| 1 | i am 64 years old but feel quite young. |
| 2 | i try to eat healthy but limit mcdonalds to once a week. |
| 3 | i regret working as a doctor for the last 20 years. |
| 4 | my secret hobby is making self-help youtube videos. |
| 5 | i've been to spain many times and wish i could go again. |
| Role | Response|
|---|---|
| Human | yo whats good fam? |
| Bot | hello, how are you doing? |
| Human | not bad what you doing today?|
| Bot | eating a lot healthier now that i'm in my 60s |
| Human | i'm more of a fast food guy |
| Bot | me too but i'm very healthy so i try to avoid it |
| Human | what do you do for fun? |
| Bot | i make videos for fun and eat very healthy |
| Human | nice what kind of videos? |
| Bot | i love making videos about self help stuff |
| Human | i think i need some help |
| Bot | what kind of help would you like to get |
| Human | i'm 21 and feel lost in life |
| Bot | i can relate, i'm only 65 |
4. The second use of PersonaGPT is to do _controlled response generation_. Use the following program to interact with the model.
```python
## available actions ##
action_space = [ 'ask about kids.', "ask about pets.", 'talk about work.',
'ask about marital status.', 'talk about travel.', 'ask about age and gender.',
'ask about hobbies.', 'ask about favorite food.', 'talk about movies.',
'talk about music.', 'talk about politics.']
# converse for 8 turns
dialog_hx = []
for step in range(8):
# choose an action
act = None
while act not in action_space:
display_dialog_history(dialog_hx)
print()
print(" actions: ")
for k,v in enumerate(action_space): print(k,v)
try:
act = action_space[int(input(" input [0-10]: " ))]
except:
act = None
print()
# format into prefix code
action_prefix = tokenizer.encode(''.join(['<|act|> '] + [act] + ['<|p1|>'] + [] + ['<|sep|>'] + ['<|start|>']))
bot_input_ids = to_var([action_prefix + flatten(dialog_hx)]).long()
# generate query conditioned on action
    msg = generate_next(bot_input_ids)  # sampling settings default to those defined in generate_next
dialog_hx.append(msg)
# generate bot response
bot_input_ids = to_var([personas+ flatten(dialog_hx)]).long()
    msg = generate_next(bot_input_ids)  # likewise, use the defaults from generate_next
dialog_hx.append(msg)
display_dialog_history(dialog_hx)
```
Example of controlled response generation:
| | Bot 2's Persona Facts |
|---|---|
| 1 | i am a quiet engineer. |
| 2 | i'm single and am looking for love. |
| 3 | sadly, i don't have any relatable hobbies. |
| 4 | luckily, however, i am tall and athletic. |
| 5 | on friday nights, i watch re-runs of the simpsons alone. |
| Role | Action | Response |
|---|---|---|
| Bot 1 | talk about movies | hello do you like movies? |
| Bot 2 | - | i love the simpsons. how about you? |
| Bot 1 | talk about music | i like the walking dead. do you play an instrument? |
| Bot 2 | - | yes i play the violin. |
| Bot 1 | ask about marital status | are you married or single? |
| Bot 2 | - | i am single but i am looking for love. |
| Bot 1 | talk about work | my job is working in a factory. what's your occupation? |
| Bot 2 | - | engineer. i'm very quiet so no one hears me. |
| Bot 1 | talk about hobbies | do you have any hobbies? |
| Bot 2 | - | i watch reruns of the simpsons. |
| Bot 1 | ask about favorite food | what's your favorite food? |
| Bot 2 | - | i love pizza. how about yourself? |
| Bot 1 | ask about pets | i also love pizza. do you like animals? |
| Bot 2 | - | i have two dogs. what is your occupation? |
| Bot 1 | talk about work | i'm a factory worker. what's your dream job? |
| Bot 2 | - | i'd love to be a writer one day. |
|
{"license": "gpl-3.0", "tags": ["conversational"]}
|
af1tang/personaGPT
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"arxiv:1801.07243",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1801.07243"
] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #arxiv-1801.07243 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
A conversational agent with many personalities (PersonaGPT)
-----------------------------------------------------------
PersonaGPT is an open-domain conversational agent designed to do 2 tasks:
1. decoding *personalized* responses based on input personality facts (the "persona" profile of the bot).
2. incorporating *turn-level goals* into its responses through "action codes" (e.g., "talk about work", "ask about favorite music").
It builds on the DialoGPT-medium pretrained model based on the GPT-2 architecture.
This model is trained on the Persona-Chat dataset, with added special tokens to better distinguish between conversational history and personality traits for dyadic conversations. Furthermore, some active learning was used to train the model to do *controlled* decoding using turn-level goals.
Full Repo
---------
Preprocessing, training and implementation details can be found in the personaGPT repo.
### How to Use
1. Load the model and define some helper functions.
2. Give your chatbot partner a set of personalities.
3. The first use of PersonaGPT is to do *personalized* dialog generation. Use the following loop to interact with the model.
Example of personalized decoding:
4. The second use of PersonaGPT is to do *controlled response generation*. Use the following program to interact with the model.
Example of controlled response generation:
Role: Bot 1, Action: talk about movies, Response: hello do you like movies?
Role: Bot 2, Action: -, Response: i love the simpsons. how about you?
Role: Bot 1, Action: talk about music, Response: i like the walking dead. do you play an instrument?
Role: Bot 2, Action: -, Response: yes i play the violin.
Role: Bot 1, Action: ask about marital status, Response: are you married or single?
Role: Bot 2, Action: -, Response: i am single but i am looking for love.
Role: Bot 1, Action: talk about work, Response: my job is working in a factory. what's your occupation?
Role: Bot 2, Action: -, Response: engineer. i'm very quiet so no one hears me.
Role: Bot 1, Action: talk about hobbies, Response: do you have any hobbies?
Role: Bot 2, Action: -, Response: i watch reruns of the simpsons.
Role: Bot 1, Action: ask about favorite food, Response: what's your favorite food?
Role: Bot 2, Action: -, Response: i love pizza. how about yourself?
Role: Bot 1, Action: ask about pets, Response: i also love pizza. do you like animals?
Role: Bot 2, Action: -, Response: i have two dogs. what is your occupation?
Role: Bot 1, Action: talk about work, Response: i'm a factory worker. what's your dream job?
Role: Bot 2, Action: -, Response: i'd love to be a writer one day.
|
[
"### How to Use\n\n\n1. Load the model and define some helper functions.\n2. Give your chatbot partner a set of personalities.\n3. The first use of PersonaGPT is to do *personalized* dialog generation. Use the following loop to interact with the model.\n\n\nExample of personalized decoding:\n\n\n\n\n4. The second use of PersonaGPT is to do *controlled response generation*. Use the following program to interact with the model.\n\n\nExample of controlled response generation:\n\n\n\nRole: Bot 1, Action: talk about movies, Response: hello do you like movies?\nRole: Bot 2, Action: -, Response: i love the simpsons. how about you?\nRole: Bot 1, Action: talk about music, Response: i like the walking dead. do you play an instrument?\nRole: Bot 2, Action: -, Response: yes i play the violin.\nRole: Bot 1, Action: ask about marital status, Response: are you married or single?\nRole: Bot 2, Action: -, Response: i am single but i am looking for love.\nRole: Bot 1, Action: talk about work, Response: my job is working in a factory. what's your occupation?\nRole: Bot 2, Action: -, Response: engineer. i'm very quiet so no one hears me.\nRole: Bot 1, Action: talk about hobbies, Response: do you have any hobbies?\nRole: Bot 2, Action: -, Response: i watch reruns of the simpsons.\nRole: Bot 1, Action: ask about favorite food, Response: what's your favorite food?\nRole: Bot 2, Action: -, Response: i love pizza. how about yourself?\nRole: Bot 1, Action: ask about pets, Response: i also love pizza. do you like animals?\nRole: Bot 2, Action: -, Response: i have two dogs. what is your occupation?\nRole: Bot 1, Action: talk about work, Response: i'm a factory worker. what's your dream job?\nRole: Bot 2, Action: -, Response: i'd love to be a writer one day."
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #arxiv-1801.07243 #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### How to Use\n\n\n1. Load the model and define some helper functions.\n2. Give your chatbot partner a set of personalities.\n3. The first use of PersonaGPT is to do *personalized* dialog generation. Use the following loop to interact with the model.\n\n\nExample of personalized decoding:\n\n\n\n\n4. The second use of PersonaGPT is to do *controlled response generation*. Use the following program to interact with the model.\n\n\nExample of controlled response generation:\n\n\n\nRole: Bot 1, Action: talk about movies, Response: hello do you like movies?\nRole: Bot 2, Action: -, Response: i love the simpsons. how about you?\nRole: Bot 1, Action: talk about music, Response: i like the walking dead. do you play an instrument?\nRole: Bot 2, Action: -, Response: yes i play the violin.\nRole: Bot 1, Action: ask about marital status, Response: are you married or single?\nRole: Bot 2, Action: -, Response: i am single but i am looking for love.\nRole: Bot 1, Action: talk about work, Response: my job is working in a factory. what's your occupation?\nRole: Bot 2, Action: -, Response: engineer. i'm very quiet so no one hears me.\nRole: Bot 1, Action: talk about hobbies, Response: do you have any hobbies?\nRole: Bot 2, Action: -, Response: i watch reruns of the simpsons.\nRole: Bot 1, Action: ask about favorite food, Response: what's your favorite food?\nRole: Bot 2, Action: -, Response: i love pizza. how about yourself?\nRole: Bot 1, Action: ask about pets, Response: i also love pizza. do you like animals?\nRole: Bot 2, Action: -, Response: i have two dogs. what is your occupation?\nRole: Bot 1, Action: talk about work, Response: i'm a factory worker. what's your dream job?\nRole: Bot 2, Action: -, Response: i'd love to be a writer one day."
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-de-finetuned-en-to-de
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-de](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6798
- Bleu: 26.4396
- Gen Len: 24.8156
## Model description
More information needed
## Intended uses & limitations
More information needed
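In the meantime, a minimal translation sketch (a hedged example using the standard transformers pipeline; the English sentence is only an illustration, not from the training data):

```python
from transformers import pipeline

# Load the fine-tuned Marian checkpoint from this repository.
translator = pipeline("translation_en_to_de", model="afreireosorio/opus-mt-en-de-finetuned-en-to-de")
print(translator("The weather report for tomorrow predicts light rain.", max_length=64))
# e.g. [{'translation_text': '...'}]
```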
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 2.0864 | 1.0 | 568611 | 1.6798 | 26.4396 | 24.8156 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.0.dev20210415+cu101
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-de-finetuned-en-to-de", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "de-en"}, "metrics": [{"type": "bleu", "value": 26.4396, "name": "Bleu"}]}]}]}
|
afreireosorio/opus-mt-en-de-finetuned-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
opus-mt-en-de-finetuned-en-to-de
================================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-en-de on the wmt16 dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6798
* Bleu: 26.4396
* Gen Len: 24.8156
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.9.0.dev20210415+cu101
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.0.dev20210415+cu101\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #marian #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.9.0.dev20210415+cu101\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# aggb DialogGPT spanish model
|
{"tags": ["conversational"]}
|
aggb/DialogGPT-small-AGGB-B
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# aggb DialogGPT spanish model
|
[
"# aggb DialogGPT spanish model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# aggb DialogGPT spanish model"
] |
text-classification
|
transformers
|
bert-base-uncased model trained on the tobacco800 dataset for the task of page-stream-segmentation.
[Link](https://github.com/agiagoulas/page-stream-segmentation) to the GitHub Repo with the model implementation.
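A minimal classification sketch (hedged: the standard transformers text-classification pipeline is assumed, and both the example page text and the meaning of the returned labels are illustrative — see the linked repository for the exact label mapping and expected input format):

```python
from transformers import pipeline

# Page-stream segmentation framed as text classification: given the OCR text of a
# scanned page, predict whether it starts a new document or continues the previous one.
classifier = pipeline("text-classification", model="agiagoulas/bert-pss")
print(classifier("Dear Sir, please find enclosed the requested report on the Tobacco800 collection."))
```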
|
{}
|
agiagoulas/bert-pss
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased model trained on the tobacco800 dataset for the task of page-stream-segmentation.
Link to the GitHub Repo with the model implementation.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
null | null |
# Text to Speech Model
## Being used for the `Audio Labeler` effect in Audacity
metadata:
```
metadata = {
'sample_rate': 16000,
'domain_tags': ['speech'],
'short_description': 'I will label your speech into text :]',
'long_description':
'This is an Audacity wrapper for the model, '
'forked from the repository '
'facebook/s2t-medium-librispeech-asr. '
'This model was trained by Changhan Wang '
'and Yun Tang and Xutai Ma and Anne Wu '
'and Dmytro Okhonko and Juan Pino.',
'tags': ['speech-to-text'],
'effect_type': 'waveform-to-labels',
'multichannel': False,
'labels': ["<pad>", "<s>", "</s>", "<unk>", "|", "E", "T", "A", "O", "N", "I", "H", "S", "R", "D", "L", "U", "M", "W", "C", "F", "G", "Y", "P", "B", "V", "K", "'", "X", "J", "Q", "Z"],
}
```
|
{"tags": ["audacity"], "inference": false}
|
aguilara42/audacity-Wav2Vec2-Base
| null |
[
"audacity",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#audacity #region-us
|
# Text to Speech Model
## Being used for the 'Audio Labeler' effect in Audacity
metadata:
|
[
"# Text to Speech Model",
"## Being used for the 'Audio Labeler' effect in Audacity\n\nmetadata:"
] |
[
"TAGS\n#audacity #region-us \n",
"# Text to Speech Model",
"## Being used for the 'Audio Labeler' effect in Audacity\n\nmetadata:"
] |
null | null |
# Labeler With Timestamps
## Being used for the `Audio Labeler` effect in Audacity
This is an audio labeler model used in Audacity's labeler effect.
metadata:
```
{
"sample_rate": 48000,
"domain_tags": ["Music"],
"tags": ["Audio Labeler"],
"effect_type": "waveform-to-labels",
"multichannel": false,
"labels": ["Acoustic Guitar", "Auxiliary Percussion", "Brass", "Clean Electric Guitar", "Distorted Electric Guitar", "Double Bass", "Drum Set", "Electric Bass", "Flute", "piano", "Reeds", "Saxophone", "Strings", "Trumpet", "Voice"],
"short_description": "Use me to label some instruments!",
"long_description": "An audio labeler, which outputs label predictions and time ranges for the labels. This model can label various instruments listed in the labels section."
}
```
|
{"tags": ["audacity"], "inference": false}
|
aguilara42/openl3-labeler-w-timestamps
| null |
[
"audacity",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#audacity #region-us
|
# Labeler With Timestamps
## Being used for the 'Audio Labeler' effect in Audacity
This is an audio labeler model used in Audacity's labeler effect.
metadata:
|
[
"# Labeler With Timestamps",
"## Being used for the 'Audio Labeler' effect in Audacity\n\nThis is a audio labeler model which is used in Audacity's labeler effect. \n\nmetadata:"
] |
[
"TAGS\n#audacity #region-us \n",
"# Labeler With Timestamps",
"## Being used for the 'Audio Labeler' effect in Audacity\n\nThis is a audio labeler model which is used in Audacity's labeler effect. \n\nmetadata:"
] |
null |
transformers
|
Hello World!
|
{}
|
ahanadeb/wav2vec2-large-indian-instrument-classification-v1
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #endpoints_compatible #region-us
|
Hello World!
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
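In the meantime, a minimal transcription sketch (a hedged example using the standard transformers ASR pipeline; `sample.wav` is a placeholder for a 16 kHz mono recording):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="ahazeemi/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav"))  # e.g. {'text': 'THE TRANSCRIBED UTTERANCE'}
```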
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
ahazeemi/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
null |
speechbrain
|
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on Voxceleb
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on Voxceleb 1+ Voxceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the Voxceleb1-test set (Cleaned) is:
| Release | EER(%) | minDCF |
|:-------------:|:--------------:|:--------------:|
| 05-03-21 | 0.69 | 0.08258 |
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
gh repo clone aheba/speechbrain-aheba-contribs
cd speechbrain-aheba-contribs  # enter the cloned repository
git checkout pretrain_new
pip install -r requirements.txt
pip install --editable .
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
from speechbrain.pretrained import Predictor
classifier = Predictor.import_model(source="aheba31/test-predictor")
signal, fs = torchaudio.load('samples/audio_samples/example1.wav')
embeddings = classifier.encode_batch(signal)
```
### Perform Speaker Verification
```python
from speechbrain.pretrained import SpeakerRecognition
verification = SpeakerRecognition.from_hparams(source="aheba31/test-predictor", savedir="aheba31/test-predictor")
score, prediction = verification.verify_files("speechbrain/spkrec-ecapa-voxceleb/example1.wav", "speechbrain/spkrec-ecapa-voxceleb/example2.flac")
```
The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
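For example, extending the verification loader shown above (only the `run_opts` argument is new; a CUDA-capable GPU is assumed):

```python
from speechbrain.pretrained import SpeakerRecognition

verification = SpeakerRecognition.from_hparams(
    source="aheba31/test-predictor",
    savedir="aheba31/test-predictor",
    run_opts={"device": "cuda"},  # place the model and inference on the GPU
)
```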
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
{"language": "en", "license": "apache-2.0", "tags": ["speechbrain", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN"], "datasets": ["voxceleb"], "metrics": ["EER"], "widget": [{"example_title": "VoxCeleb Speaker id10003", "src": "https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav"}, {"example_title": "VoxCeleb Speaker id10004", "src": "https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav"}]}
|
aheba31/test-predictor
| null |
[
"speechbrain",
"embeddings",
"Speaker",
"Verification",
"Identification",
"pytorch",
"ECAPA",
"TDNN",
"en",
"dataset:voxceleb",
"arxiv:2106.04624",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2106.04624"
] |
[
"en"
] |
TAGS
#speechbrain #embeddings #Speaker #Verification #Identification #pytorch #ECAPA #TDNN #en #dataset-voxceleb #arxiv-2106.04624 #license-apache-2.0 #region-us
|
Speaker Verification with ECAPA-TDNN embeddings on Voxceleb
===========================================================
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on Voxceleb 1+ Voxceleb2 training data.
For a better experience, we encourage you to learn more about
SpeechBrain. The model performance on the Voxceleb1-test set (Cleaned) is:
Pipeline description
--------------------
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
Install SpeechBrain
-------------------
First of all, please install SpeechBrain with the following command:
Please notice that we encourage you to read our tutorials and learn more about
SpeechBrain.
### Compute your speaker embeddings
### Perform Speaker Verification
The prediction is 1 if the two input signals are from the same speaker and 0 otherwise.
### Inference on GPU
To perform inference on the GPU, add 'run\_opts={"device":"cuda"}' when calling the 'from\_hparams' method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
2. Install it:
3. Run Training:
You can find our training results (models, logs, etc) here.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
Citing SpeechBrain
==================
Please, cite SpeechBrain if you use it for your research or business.
About SpeechBrain
=================
* Website: URL
* Code: URL
* HuggingFace: URL
|
[
"### Compute your speaker embeddings",
"### Perform Speaker Verification\n\n\nThe prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.",
"### Training\n\n\nThe model was trained with SpeechBrain (aa018540).\nTo train it from scratch follows these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) here.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.",
"#### Referencing ECAPA-TDNN\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL"
] |
[
"TAGS\n#speechbrain #embeddings #Speaker #Verification #Identification #pytorch #ECAPA #TDNN #en #dataset-voxceleb #arxiv-2106.04624 #license-apache-2.0 #region-us \n",
"### Compute your speaker embeddings",
"### Perform Speaker Verification\n\n\nThe prediction is 1 if the two signals in input are from the same speaker and 0 otherwise.",
"### Inference on GPU\n\n\nTo perform inference on the GPU, add 'run\\_opts={\"device\":\"cuda\"}' when calling the 'from\\_hparams' method.",
"### Training\n\n\nThe model was trained with SpeechBrain (aa018540).\nTo train it from scratch follows these steps:\n\n\n1. Clone SpeechBrain:\n2. Install it:\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) here.",
"### Limitations\n\n\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.",
"#### Referencing ECAPA-TDNN\n\n\nCiting SpeechBrain\n==================\n\n\nPlease, cite SpeechBrain if you use it for your research or business.\n\n\nAbout SpeechBrain\n=================\n\n\n* Website: URL\n* Code: URL\n* HuggingFace: URL"
] |
null |
transformers
|
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on Zaion
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on Voxceleb 1+ Voxceleb2 training data.
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance on the Voxceleb1-test set (Cleaned) is:
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
gh repo clone aheba/speechbrain-aheba-contribs
cd speechbrain-aheba-contribs  # enter the cloned repository
git checkout pretrain_new
pip install -r requirements.txt
pip install --editable .
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Compute your speaker embeddings
```python
import torchaudio
import torch
from speechbrain.pretrained import Pretrained
classifier = Pretrained.import_model(source="aheba31/test-predictor", pymodule_file="inference.py", class_name="EncoderClassifier")
print(classifier.classify_file("/workspace/contributions/test/spkrec-ecapa-voxceleb/example1.wav"))
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/VoxCeleb/SpeakerRec
python train_speaker_embeddings.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1-ahC1xeyPinAHp2oAohL-02smNWO41Cc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
```
@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
{"language": "en", "license": "apache-2.0", "tags": ["speechbrain", "embeddings", "Speaker", "Verification", "Identification", "pytorch", "ECAPA", "TDNN"], "datasets": ["Zaion corpus"], "metrics": ["EER"], "widget": [{"example_title": "VoxCeleb Speaker id10003", "src": "https://cdn-media.huggingface.co/speech_samples/VoxCeleb1_00003.wav"}, {"example_title": "VoxCeleb Speaker id10004", "src": "https://cdn-media.huggingface.co/speech_samples/VoxCeleb_00004.wav"}]}
|
aheba31/zaion-speaker-ident
| null |
[
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #endpoints_compatible #region-us
|
<iframe src="URL" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Speaker Verification with ECAPA-TDNN embeddings on Zaion
This repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain.
The system can be used to extract speaker embeddings as well.
It is trained on Voxceleb 1+ Voxceleb2 training data.
For a better experience, we encourage you to learn more about
SpeechBrain. The model performance on the Voxceleb1-test set (Cleaned) is:
## Pipeline description
This system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
Please notice that we encourage you to read our tutorials and learn more about
SpeechBrain.
### Compute your speaker embeddings
### Inference on GPU
To perform inference on the GPU, add 'run_opts={"device":"cuda"}' when calling the 'from_hparams' method.
### Training
The model was trained with SpeechBrain (aa018540).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
2. Install it:
3. Run Training:
You can find our training results (models, logs, etc) here.
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA-TDNN
# Citing SpeechBrain
Please, cite SpeechBrain if you use it for your research or business.
# About SpeechBrain
- Website: URL
- Code: URL
- HuggingFace: URL
|
[
"# Speaker Verification with ECAPA-TDNN embeddings on Zaion\n\nThis repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. \nThe system can be used to extract speaker embeddings as well. \nIt is trained on Voxceleb 1+ Voxceleb2 training data. \n\nFor a better experience, we encourage you to learn more about\nSpeechBrain. The model performance on Voxceleb1-test set(Cleaned) is:",
"## Pipeline description\n\nThis system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.",
"## Install SpeechBrain\n\nFirst of all, please install SpeechBrain with the following command:\n\n\n\nPlease notice that we encourage you to read our tutorials and learn more about\nSpeechBrain.",
"### Compute your speaker embeddings",
"### Inference on GPU\nTo perform inference on the GPU, add 'run_opts={\"device\":\"cuda\"}' when calling the 'from_hparams' method.",
"### Training\nThe model was trained with SpeechBrain (aa018540).\nTo train it from scratch follows these steps:\n1. Clone SpeechBrain:\n\n2. Install it:\n\n\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) here.",
"### Limitations\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.",
"#### Referencing ECAPA-TDNN",
"# Citing SpeechBrain\nPlease, cite SpeechBrain if you use it for your research or business.",
"# About SpeechBrain\n- Website: URL\n- Code: URL\n- HuggingFace: URL"
] |
[
"TAGS\n#transformers #endpoints_compatible #region-us \n",
"# Speaker Verification with ECAPA-TDNN embeddings on Zaion\n\nThis repository provides all the necessary tools to perform speaker verification with a pretrained ECAPA-TDNN model using SpeechBrain. \nThe system can be used to extract speaker embeddings as well. \nIt is trained on Voxceleb 1+ Voxceleb2 training data. \n\nFor a better experience, we encourage you to learn more about\nSpeechBrain. The model performance on Voxceleb1-test set(Cleaned) is:",
"## Pipeline description\n\nThis system is composed of an ECAPA-TDNN model. It is a combination of convolutional and residual blocks. The embeddings are extracted using attentive statistical pooling. The system is trained with Additive Margin Softmax Loss. Speaker Verification is performed using cosine distance between speaker embeddings.",
"## Install SpeechBrain\n\nFirst of all, please install SpeechBrain with the following command:\n\n\n\nPlease notice that we encourage you to read our tutorials and learn more about\nSpeechBrain.",
"### Compute your speaker embeddings",
"### Inference on GPU\nTo perform inference on the GPU, add 'run_opts={\"device\":\"cuda\"}' when calling the 'from_hparams' method.",
"### Training\nThe model was trained with SpeechBrain (aa018540).\nTo train it from scratch follows these steps:\n1. Clone SpeechBrain:\n\n2. Install it:\n\n\n3. Run Training:\n\n\nYou can find our training results (models, logs, etc) here.",
"### Limitations\nThe SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.",
"#### Referencing ECAPA-TDNN",
"# Citing SpeechBrain\nPlease, cite SpeechBrain if you use it for your research or business.",
"# About SpeechBrain\n- Website: URL\n- Code: URL\n- HuggingFace: URL"
] |
text-classification
|
transformers
|
### FinancialBERT for Sentiment Analysis
[*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from this model without needing the significant computational resources required to train it.
The model was fine-tuned for the Sentiment Analysis task on the _Financial PhraseBank_ dataset. Experiments show that this model outperforms the general BERT and other financial domain-specific models.
More details on `FinancialBERT`'s pre-training process can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
### Training data
FinancialBERT model was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive).
### Fine-tuning hyper-parameters
- learning_rate = 2e-5
- batch_size = 32
- max_seq_length = 512
- num_train_epochs = 5
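For reference, a minimal fine-tuning sketch built around these hyper-parameters (hedged: the toy dataset below reuses the example sentences from this card, and the 0/1/2 label order is an assumption for illustration, not the released label mapping; the Trainer API shown is the standard transformers one, not necessarily the authors' exact script):

```python
import torch
from transformers import (BertForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)

texts = [
    "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.",
    "Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.",
    "Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.",
]
labels = [2, 1, 0]  # assumed order: 0 = negative, 1 = neutral, 2 = positive

tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT")
encodings = tokenizer(texts, truncation=True, max_length=512, padding=True)  # max_seq_length = 512

class PhraseBankDataset(torch.utils.data.Dataset):
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT", num_labels=3)
args = TrainingArguments(
    output_dir="financialbert-sentiment",
    learning_rate=2e-5,               # hyper-parameters listed above
    per_device_train_batch_size=32,
    num_train_epochs=5,
)
Trainer(model=model, args=args, train_dataset=PhraseBankDataset(encodings, labels)).train()
```

On the full Financial PhraseBank one would add the real train/test splits and an evaluation step; this only illustrates how the listed hyper-parameters map onto `TrainingArguments`.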
### Evaluation metrics
The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.
| sentiment | precision | recall | f1-score | support |
| ------------- |:-------------:|:-------------:|:-------------:| -----:|
| negative | 0.96 | 0.97 | 0.97 | 58 |
| neutral | 0.98 | 0.99 | 0.98 | 279 |
| positive | 0.98 | 0.97 | 0.97 | 148 |
| macro avg | 0.97 | 0.98 | 0.98 | 485 |
| weighted avg | 0.98 | 0.98 | 0.98 | 485 |
### How to use
The model can be used thanks to Transformers pipeline for sentiment analysis.
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
model = BertForSequenceClassification.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis",num_labels=3)
tokenizer = BertTokenizer.from_pretrained("ahmedrachid/FinancialBERT-Sentiment-Analysis")
nlp = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
sentences = ["Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales.",
"Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.",
"Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008.",
]
results = nlp(sentences)
print(results)
[{'label': 'positive', 'score': 0.9998133778572083},
{'label': 'neutral', 'score': 0.9997822642326355},
{'label': 'negative', 'score': 0.9877365231513977}]
```
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
{"language": "en", "tags": ["financial-sentiment-analysis", "sentiment-analysis"], "datasets": ["financial_phrasebank"], "widget": [{"text": "Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in the corresponding period in 2007 representing 7.7 % of net sales."}, {"text": "Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000."}, {"text": "Raute reported a loss per share of EUR 0.86 for the first half of 2009 , against EPS of EUR 0.74 in the corresponding period of 2008."}]}
|
ahmedrachid/FinancialBERT-Sentiment-Analysis
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"dataset:financial_phrasebank",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #financial-sentiment-analysis #sentiment-analysis #en #dataset-financial_phrasebank #autotrain_compatible #endpoints_compatible #has_space #region-us
|
### FinancialBERT for Sentiment Analysis
*FinancialBERT* is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from this model without needing the significant computational resources required to train it.
The model was fine-tuned for the Sentiment Analysis task on the *Financial PhraseBank* dataset. Experiments show that this model outperforms the general BERT and other financial domain-specific models.
More details on 'FinancialBERT''s pre-training process can be found at: URL
### Training data
FinancialBERT model was fine-tuned on Financial PhraseBank, a dataset of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive).
### Fine-tuning hyper-parameters
* learning\_rate = 2e-5
* batch\_size = 32
* max\_seq\_length = 512
* num\_train\_epochs = 5
### Evaluation metrics
The evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.
### How to use
The model can be used thanks to Transformers pipeline for sentiment analysis.
> Created by Ahmed Rachid Hazourli
|
[
"### FinancialBERT for Sentiment Analysis\n\n\n*FinancialBERT* is a BERT model pre-trained on a large corpora of financial texts. The purpose is to enhance financial NLP research and practice in financial domain, hoping that financial practitioners and researchers can benefit from this model without the necessity of the significant computational resources required to train the model.\n\n\nThe model was fine-tuned for Sentiment Analysis task on *Financial PhraseBank* dataset. Experiments show that this model outperforms the general BERT and other financial domain-specific models.\n\n\nMore details on 'FinancialBERT''s pre-training process can be found at: URL",
"### Training data\n\n\nFinancialBERT model was fine-tuned on Financial PhraseBank, a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive).",
"### Fine-tuning hyper-parameters\n\n\n* learning\\_rate = 2e-5\n* batch\\_size = 32\n* max\\_seq\\_length = 512\n* num\\_train\\_epochs = 5",
"### Evaluation metrics\n\n\nThe evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.",
"### How to use\n\n\nThe model can be used thanks to Transformers pipeline for sentiment analysis.\n\n\n\n> \n> Created by Ahmed Rachid Hazourli\n> \n> \n>"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #financial-sentiment-analysis #sentiment-analysis #en #dataset-financial_phrasebank #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### FinancialBERT for Sentiment Analysis\n\n\n*FinancialBERT* is a BERT model pre-trained on a large corpora of financial texts. The purpose is to enhance financial NLP research and practice in financial domain, hoping that financial practitioners and researchers can benefit from this model without the necessity of the significant computational resources required to train the model.\n\n\nThe model was fine-tuned for Sentiment Analysis task on *Financial PhraseBank* dataset. Experiments show that this model outperforms the general BERT and other financial domain-specific models.\n\n\nMore details on 'FinancialBERT''s pre-training process can be found at: URL",
"### Training data\n\n\nFinancialBERT model was fine-tuned on Financial PhraseBank, a dataset consisting of 4840 Financial News categorised by sentiment (negative, neutral, positive).",
"### Fine-tuning hyper-parameters\n\n\n* learning\\_rate = 2e-5\n* batch\\_size = 32\n* max\\_seq\\_length = 512\n* num\\_train\\_epochs = 5",
"### Evaluation metrics\n\n\nThe evaluation metrics used are: Precision, Recall and F1-score. The following is the classification report on the test set.",
"### How to use\n\n\nThe model can be used thanks to Transformers pipeline for sentiment analysis.\n\n\n\n> \n> Created by Ahmed Rachid Hazourli\n> \n> \n>"
] |
fill-mask
|
transformers
|
**FinancialBERT** is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train the model.
The model was trained on a large corpus of financial texts:
- *TRC2-financial*: 1.8M news articles that were published by Reuters between 2008 and 2010.
- *Bloomberg News*: 400,000 articles between 2006 and 2013.
- *Corporate Reports*: 192,000 transcripts (10-K & 10-Q)
- *Earning Calls*: 42,156 documents.
More details on `FinancialBERT` can be found at: https://www.researchgate.net/publication/358284785_FinancialBERT_-_A_Pretrained_Language_Model_for_Financial_Text_Mining
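A quick fill-mask sketch (hedged: standard transformers pipeline; the sentence is one of the widget examples from this card):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ahmedrachid/FinancialBERT")
for pred in fill_mask("Tesla remains one of the highest [MASK] stocks on the market."):
    print(pred["token_str"], round(pred["score"], 4))
```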
> Created by [Ahmed Rachid Hazourli](https://www.linkedin.com/in/ahmed-rachid/)
|
{"language": "en", "tags": ["fill-mask"], "widget": [{"text": "Tesla remains one of the highest [MASK] stocks on the market. Meanwhile, Aurora Innovation is a pre-revenue upstart that shows promise."}, {"text": "Asian stocks [MASK] from a one-year low on Wednesday as U.S. share futures and oil recovered from the previous day's selloff, but uncertainty over the impact of the Omicron"}, {"text": "U.S. stocks were set to rise on Monday, led by [MASK] in Apple which neared $3 trillion in market capitalization, while investors braced for a Federal Reserve meeting later this week."}]}
|
ahmedrachid/FinancialBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #has_space #region-us
|
FinancialBERT is a BERT model pre-trained on a large corpus of financial texts. The purpose is to enhance financial NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from it without needing the significant computational resources required to train the model.
The model was trained on a large corpus of financial texts:
- *TRC2-financial*: 1.8M news articles that were published by Reuters between 2008 and 2010.
- *Bloomberg News*: 400,000 articles between 2006 and 2013.
- *Corporate Reports*: 192,000 transcripts (10-K & 10-Q)
- *Earning Calls*: 42,156 documents.
More details on 'FinancialBERT' can be found at: URL
> Created by Ahmed Rachid Hazourli
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #en #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
text2text-generation
|
transformers
|
# Bert2Bert Turkish Paraphrase Generation
# INISTA 2021
# Comparison of Turkish Paraphrase Generation Models
# Dataset
The dataset used for model training was created by combining a translation of the QQP dataset with a manually generated dataset.
Dataset [Link](https://drive.google.com/file/d/1-2l9EwIzXZ7fUkNW1vdeF3lzQp2pygp_/view?usp=sharing)
# How To Use
```python
from transformers import BertTokenizerFast,EncoderDecoderModel
tokenizer=BertTokenizerFast.from_pretrained("dbmdz/bert-base-turkish-cased")
model = EncoderDecoderModel.from_pretrained("ahmetbagci/bert2bert-turkish-paraphrase-generation")
text="son model arabalar çevreye daha mı az zarar veriyor?"
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
#sample output
#son model arabalar çevre için daha az zararlı mı?
```
# Cite
```bibtex
@INPROCEEDINGS{9548335,
author={Bağcı, Ahmet and Amasyali, Mehmet Fatih},
booktitle={2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA)},
title={Comparison of Turkish Paraphrase Generation Models},
year={2021},
volume={},
number={},
pages={1-6},
doi={10.1109/INISTA52262.2021.9548335}
}
```
|
{"language": ["tr"], "tags": ["paraphrasing", "encoder-decoder", "seq2seq", "bert"]}
|
ahmetbagci/bert2bert-turkish-paraphrase-generation
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"paraphrasing",
"seq2seq",
"bert",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #paraphrasing #seq2seq #bert #tr #autotrain_compatible #endpoints_compatible #region-us
|
# Bert2Bert Turkish Paraphrase Generation
# INISTA 2021
# Comparison of Turkish Paraphrase Generation Models
# Dataset
The dataset used for model training was created by combining a translation of the QQP dataset with a manually generated dataset.
Dataset Link
# How To Use
# Cite
|
[] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #paraphrasing #seq2seq #bert #tr #autotrain_compatible #endpoints_compatible #region-us \n"
] |
question-answering
|
transformers
|
## Albert xxlarge version 1 language model fine-tuned on SQuAD2.0
### (updated 30Sept2020) with the following results:
```
exact: 86.11134506864315
f1: 89.35371214945009
total: 11873
HasAns_exact: 83.56950067476383
HasAns_f1: 90.06353312254078
HasAns_total: 5928
NoAns_exact: 88.64592094196804
NoAns_f1: 88.64592094196804
NoAns_total: 5945
best_exact: 86.11134506864315
best_exact_thresh: 0.0
best_f1: 89.35371214944985
best_f1_thresh: 0.0
```
### from script:
```
python ${EXAMPLES}/run_squad.py \
--model_type albert \
--model_name_or_path albert-xxlarge-v1 \
--do_train \
--do_eval \
--train_file ${SQUAD}/train-v2.0.json \
--predict_file ${SQUAD}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--max_steps 8144 \
--warmup_steps 814 \
--learning_rate 3e-5 \
--max_seq_length 512 \
--doc_stride 128 \
--per_gpu_train_batch_size 6 \
--gradient_accumulation_steps 8 \
--per_gpu_eval_batch_size 48 \
--fp16 \
--fp16_opt_level O1 \
--threads 12 \
--logging_steps 50 \
--save_steps 3000 \
--overwrite_output_dir \
--output_dir ${MODEL_PATH}
```
### using the following software & system:
```
Transformers: 3.1.0
PyTorch: 1.6.0
TensorFlow: 2.3.1
Python: 3.8.1
OS: Linux-5.4.0-48-generic-x86_64-with-glibc2.10
CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
```
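A minimal inference sketch (hedged: standard transformers question-answering pipeline; the question and context are illustrative only):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ahotrod/albert_xxlargev1_squad2_512")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This ALBERT xxlarge v1 checkpoint was fine-tuned on SQuAD2.0 with a maximum sequence length of 512.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'SQuAD2.0'}
```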
|
{}
|
ahotrod/albert_xxlargev1_squad2_512
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #albert #question-answering #endpoints_compatible #has_space #region-us
|
## Albert xxlarge version 1 language model fine-tuned on SQuAD2.0
### (updated 30Sept2020) with the following results:
### from script:
### using the following software & system:
|
[
"## Albert xxlarge version 1 language model fine-tuned on SQuAD2.0",
"### (updated 30Sept2020) with the following results:",
"### from script:",
"### using the following software & system:"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #question-answering #endpoints_compatible #has_space #region-us \n",
"## Albert xxlarge version 1 language model fine-tuned on SQuAD2.0",
"### (updated 30Sept2020) with the following results:",
"### from script:",
"### using the following software & system:"
] |
question-answering
|
transformers
|
## ELECTRA_large_discriminator language model fine-tuned on SQuAD2.0
### with the following results:
```
"exact": 87.09677419354838,
"f1": 89.98343832723452,
"total": 11873,
"HasAns_exact": 84.66599190283401,
"HasAns_f1": 90.44759839056285,
"HasAns_total": 5928,
"NoAns_exact": 89.52060555088309,
"NoAns_f1": 89.52060555088309,
"NoAns_total": 5945,
"best_exact": 87.09677419354838,
"best_exact_thresh": 0.0,
"best_f1": 89.98343832723432,
"best_f1_thresh": 0.0
```
### from script:
```
python ${EXAMPLES}/run_squad.py \
--model_type electra \
--model_name_or_path google/electra-large-discriminator \
--do_train \
--do_eval \
--train_file ${SQUAD}/train-v2.0.json \
--predict_file ${SQUAD}/dev-v2.0.json \
--version_2_with_negative \
--do_lower_case \
--num_train_epochs 3 \
--warmup_steps 306 \
--weight_decay 0.01 \
--learning_rate 3e-5 \
--max_grad_norm 0.5 \
--adam_epsilon 1e-6 \
--max_seq_length 512 \
--doc_stride 128 \
--per_gpu_train_batch_size 8 \
--gradient_accumulation_steps 16 \
--per_gpu_eval_batch_size 128 \
--fp16 \
--fp16_opt_level O1 \
--threads 12 \
--logging_steps 50 \
--save_steps 1000 \
--overwrite_output_dir \
--output_dir ${MODEL_PATH}
```
### using the following system & software:
```
Transformers: 2.11.0
PyTorch: 1.5.0
TensorFlow: 2.2.0
Python: 3.8.1
OS/Platform: Linux-5.3.0-59-generic-x86_64-with-glibc2.10
CPU/GPU: Intel i9-9900K / NVIDIA Titan RTX 24GB
```
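A minimal inference sketch (hedged: standard transformers question-answering pipeline; because this is a SQuAD2.0 model, passing `handle_impossible_answer=True` lets it return an empty answer when the context contains none):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="ahotrod/electra_large_discriminator_squad2_512")
result = qa(
    question="What colour is the building?",
    context="This ELECTRA-large discriminator was fine-tuned on SQuAD2.0, which includes unanswerable questions.",
    handle_impossible_answer=True,
)
print(result)  # an empty 'answer' string indicates the model abstained
```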
|
{}
|
ahotrod/electra_large_discriminator_squad2_512
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #electra #question-answering #endpoints_compatible #has_space #region-us
|
## ELECTRA_large_discriminator language model fine-tuned on SQuAD2.0
### with the following results:
### from script:
### using the following system & software:
|
[
"## ELECTRA_large_discriminator language model fine-tuned on SQuAD2.0",
"### with the following results:",
"### from script:",
"### using the following system & software:"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #question-answering #endpoints_compatible #has_space #region-us \n",
"## ELECTRA_large_discriminator language model fine-tuned on SQuAD2.0",
"### with the following results:",
"### from script:",
"### using the following system & software:"
] |
text2text-generation
|
transformers
|
IndicBART is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use the IndicBART model to build natural language generation applications for Indian languages by fine-tuning it with supervised training data for tasks like machine translation, summarization, and question generation. Some salient features of IndicBART are:
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li>
<li> Trained on large Indic language corpora (452 million sentences and 9 billion tokens) which also includes Indian English content. </li>
<li> All languages, except English, have been represented in Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
You can read more about IndicBART in this <a href="https://arxiv.org/abs/2109.02903">paper</a>.
For detailed documentation, look here: https://github.com/AI4Bharat/indic-bart/ and https://indicnlp.ai4bharat.org/indic-bart/
# Pre-training corpus
We used the <a href="https://indicnlp.ai4bharat.org/corpora/">IndicCorp</a> data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
# Usage:
```
from transformers import MBartForConditionalGeneration, AutoModelForSeq2SeqLM
from transformers import AlbertTokenizer, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True)
# Or use tokenizer = AlbertTokenizer.from_pretrained("ai4bharat/IndicBART", do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ai4bharat/IndicBART")
# Or use model = MBartForConditionalGeneration.from_pretrained("ai4bharat/IndicBART")
# Some initial mapping
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
# To get lang_id use any of ['<2as>', '<2bn>', '<2en>', '<2gu>', '<2hi>', '<2kn>', '<2ml>', '<2mr>', '<2or>', '<2pa>', '<2ta>', '<2te>']
# First tokenize the input and outputs. The format below is how IndicBART was trained so the input should be "Sentence </s> <2xx>" where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
inp = tokenizer("I am a boy </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[ 466, 1981, 80, 25573, 64001, 64004]])
out = tokenizer("<2hi> मैं एक लड़का हूँ </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 942, 43, 32720, 8384, 64001]])
# Note that if you use any language other than Hindi or Marathi, you should convert its script to Devanagari using the Indic NLP Library.
model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
# For loss
model_outputs.loss ## This is not label smoothed.
# For logits
model_outputs.logits
# For generation. Pardon the messiness. Note the decoder_start_token_id.
model.eval() # Set dropouts to zero
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
# Decode to get output strings
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am a boy
# Note that if your output language is not Hindi or Marathi, you should convert its script from Devanagari to the desired language using the Indic NLP Library.
# What if we mask?
inp = tokenizer("I am [MASK] </s> <2en>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # I am happy
inp = tokenizer("मैं [MASK] हूँ </s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मैं जानता हूँ
inp = tokenizer("मला [MASK] पाहिजे </s> <2mr>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids
model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=20, min_length=1, early_stopping=True, pad_token_id=pad_id, bos_token_id=bos_id, eos_token_id=eos_id, decoder_start_token_id=tokenizer._convert_token_to_id_with_added_voc("<2en>"))
decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(decoded_output) # मला ओळखलं पाहिजे
```
# Notes:
1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in https://huggingface.co/docs/transformers/model_doc/mbart#transformers.MBartForConditionalGeneration
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
4. If you wish to use any language written in a non-Devanagari script (except English), then you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, you should convert it back into the original script.
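A small sketch of the script round-trip mentioned in note 4 (hedged: it assumes the `indic-nlp-library` package and its `UnicodeIndicTransliterator` API; the Tamil sentence is only an illustration):

```python
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

ta_text = "நான் ஒரு பையன்"  # illustrative Tamil input
# Map Tamil script to Devanagari before feeding the model ...
deva_text = UnicodeIndicTransliterator.transliterate(ta_text, "ta", "hi")
# ... and map the generated Devanagari output back to Tamil script afterwards.
back_text = UnicodeIndicTransliterator.transliterate(deva_text, "hi", "ta")
print(deva_text)
print(back_text)
```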
# Fine-tuning on a downstream task
1. If you wish to fine-tune this model, then you can do so using the <a href="https://github.com/prajdabre/yanmtt">YANMTT</a> toolkit, following the instructions <a href="https://github.com/AI4Bharat/indic-bart">here</a>.
2. (Untested) Alternatively, you may use the official huggingface scripts for <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation">translation</a> and <a href="https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization">summarization</a>.
# Contributors
<ul>
<li> Raj Dabre </li>
<li> Himani Shrotriya </li>
<li> Anoop Kunchukuttan </li>
<li> Ratish Puduppully </li>
<li> Mitesh M. Khapra </li>
<li> Pratyush Kumar </li>
</ul>
# Paper
If you use IndicBART, please cite the following paper:
```
@misc{dabre2021indicbart,
title={IndicBART: A Pre-trained Model for Natural Language Generation of Indic Languages},
author={Raj Dabre and Himani Shrotriya and Anoop Kunchukuttan and Ratish Puduppully and Mitesh M. Khapra and Pratyush Kumar},
year={2021},
eprint={2109.02903},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# License
The model is available under the MIT License.
|
{"language": ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "tags": ["multilingual", "nlp", "indicnlp"]}
|
ai4bharat/IndicBART
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"multilingual",
"nlp",
"indicnlp",
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"arxiv:2109.02903",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.02903"
] |
[
"as",
"bn",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #multilingual #nlp #indicnlp #as #bn #gu #hi #kn #ml #mr #or #pa #ta #te #arxiv-2109.02903 #autotrain_compatible #endpoints_compatible #region-us
|
IndicBART is a multilingual, sequence-to-sequence pre-trained model focusing on Indic languages and English. It currently supports 11 Indian languages and is based on the mBART architecture. You can use IndicBART model to build natural language generation applications for Indian languages by finetuning the model with supervised training data for tasks like machine translation, summarization, question generation, etc. Some salient features of the IndicBART are:
<ul>
<li >Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, Telugu and English. Not all of these languages are supported by mBART50 and mT5. </li>
<li >The model is much smaller than the mBART and mT5(-base) models, so less computationally expensive for finetuning and decoding. </li>
<li> Trained on large Indic language corpora (452 million sentences and 9 billion tokens) which also includes Indian English content. </li>
<li> All languages, except English, have been represented in Devanagari script to encourage transfer learning among the related languages. </li>
</ul>
You can read more about IndicBART in this <a href="URL
For detailed documentation, look here: URL and URL
# Pre-training corpus
We used the <a href="URL data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.
# Usage:
# Notes:
1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible.
2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in URL
3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.
4. If you wish to use any language written in a non-Devanagari script (except English), then you should first convert it to Devanagari using the <a href="URL NLP Library</a>. After you get the output, you should convert it back into the original script.
# Fine-tuning on a downstream task
1. If you wish to fine-tune this model, then you can do so using the <a href="URL toolkit, following the instructions <a href="URL ">here</a>.
2. (Untested) Alternatively, you may use the official huggingface scripts for <a href="URL and <a href="URL
# Contributors
<ul>
<li> Raj Dabre </li>
<li> Himani Shrotriya </li>
<li> Anoop Kunchukuttan </li>
<li> Ratish Puduppully </li>
<li> Mitesh M. Khapra </li>
<li> Pratyush Kumar </li>
</ul>
# Paper
If you use IndicBART, please cite the following paper:
# License
The model is available under the MIT License.
|
[
"# Pre-training corpus\n\nWe used the <a href=\"URL data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.",
"# Usage:",
"# Notes:\n1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible.\n2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in URL\n3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.\n4. If you wish to use any language written in a non-Devanagari script (except English), then you should first convert it to Devanagari using the <a href=\"URL NLP Library</a>. After you get the output, you should convert it back into the original script.",
"# Fine-tuning on a downstream task\n\n1. If you wish to fine-tune this model, then you can do so using the <a href=\"URL toolkit, following the instructions <a href=\"URL \">here</a>.\n2. (Untested) Alternatively, you may use the official huggingface scripts for <a href=\"URL and <a href=\"URL",
"# Contributors\n<ul>\n<li> Raj Dabre </li>\n<li> Himani Shrotriya </li>\n<li> Anoop Kunchukuttan </li>\n<li> Ratish Puduppully </li>\n<li> Mitesh M. Khapra </li>\n<li> Pratyush Kumar </li>\n</ul>",
"# Paper\nIf you use IndicBART, please cite the following paper:",
"# License\nThe model is available under the MIT License."
] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #multilingual #nlp #indicnlp #as #bn #gu #hi #kn #ml #mr #or #pa #ta #te #arxiv-2109.02903 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pre-training corpus\n\nWe used the <a href=\"URL data spanning 12 languages with 452 million sentences (9 billion tokens). The model was trained using the text-infilling objective used in mBART.",
"# Usage:",
"# Notes:\n1. This is compatible with the latest version of transformers but was developed with version 4.3.2 so consider using 4.3.2 if possible.\n2. While I have only shown how to get logits and loss and how to generate outputs, you can do pretty much everything the MBartForConditionalGeneration class can do as in URL\n3. Note that the tokenizer I have used is based on sentencepiece and not BPE. Therefore, I used the AlbertTokenizer class and not the MBartTokenizer class.\n4. If you wish to use any language written in a non-Devanagari script (except English), then you should first convert it to Devanagari using the <a href=\"URL NLP Library</a>. After you get the output, you should convert it back into the original script.",
"# Fine-tuning on a downstream task\n\n1. If you wish to fine-tune this model, then you can do so using the <a href=\"URL toolkit, following the instructions <a href=\"URL \">here</a>.\n2. (Untested) Alternatively, you may use the official huggingface scripts for <a href=\"URL and <a href=\"URL",
"# Contributors\n<ul>\n<li> Raj Dabre </li>\n<li> Himani Shrotriya </li>\n<li> Anoop Kunchukuttan </li>\n<li> Ratish Puduppully </li>\n<li> Mitesh M. Khapra </li>\n<li> Pratyush Kumar </li>\n</ul>",
"# Paper\nIf you use IndicBART, please cite the following paper:",
"# License\nThe model is available under the MIT License."
] |
null |
transformers
|
# IndicBERT
IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on our novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. IndicBERT has far fewer parameters than other multilingual models (mBERT, XLM-R, etc.) while achieving performance on par with or better than these models.
The 12 languages covered by IndicBERT are: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu.
The code can be found [here](https://github.com/divkakwani/indic-bert). For more information, check out our [project page](https://indicnlp.ai4bharat.org/) or our [paper](https://indicnlp.ai4bharat.org/papers/arxiv2020_indicnlp_corpus.pdf).
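For orientation, a minimal loading sketch with the standard Transformers Auto classes is shown below; it only extracts sentence embeddings and makes no claims about the fine-tuning setup (make sure `sentencepiece` is installed for the ALBERT tokenizer):
```python
# Minimal sketch: load indic-bert from the Hugging Face Hub and embed a sentence.
# Assumption: sentencepiece is installed for the ALBERT tokenizer.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
model = AutoModel.from_pretrained("ai4bharat/indic-bert")

inputs = tokenizer("मैं एक लड़का हूँ", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```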
## Pretraining Corpus
We pre-trained indic-bert on AI4Bharat's monolingual corpus. The corpus has the following distribution of languages:
| Language | as | bn | en | gu | hi | kn | |
| ----------------- | ------ | ------ | ------ | ------ | ------ | ------ | ------- |
| **No. of Tokens** | 36.9M | 815M | 1.34B | 724M | 1.84B | 712M | |
| **Language** | **ml** | **mr** | **or** | **pa** | **ta** | **te** | **all** |
| **No. of Tokens** | 767M | 560M | 104M | 814M | 549M | 671M | 8.9B |
## Evaluation Results
IndicBERT is evaluated on IndicGLUE and some additional tasks. The results are summarized below. For more details about the tasks, refer to our [official repo](https://github.com/divkakwani/indic-bert).
#### IndicGLUE
Task | mBERT | XLM-R | IndicBERT
-----| ----- | ----- | ------
News Article Headline Prediction | 89.58 | 95.52 | **95.87**
Wikipedia Section Title Prediction| **73.66** | 66.33 | 73.31
Cloze-style multiple-choice QA | 39.16 | 27.98 | **41.87**
Article Genre Classification | 90.63 | 97.03 | **97.34**
Named Entity Recognition (F1-score) | **73.24** | 65.93 | 64.47
Cross-Lingual Sentence Retrieval Task | 21.46 | 13.74 | **27.12**
Average | 64.62 | 61.09 | **66.66**
#### Additional Tasks
Task | Task Type | mBERT | XLM-R | IndicBERT
-----| ----- | ----- | ------ | -----
BBC News Classification | Genre Classification | 60.55 | **75.52** | 74.60
IIT Product Reviews | Sentiment Analysis | 74.57 | **78.97** | 71.32
IITP Movie Reviews | Sentiment Analysis | 56.77 | **61.61** | 59.03
Soham News Article | Genre Classification | 80.23 | **87.6** | 78.45
Midas Discourse | Discourse Analysis | 71.20 | **79.94** | 78.44
iNLTK Headlines Classification | Genre Classification | 87.95 | 93.38 | **94.52**
ACTSA Sentiment Analysis | Sentiment Analysis | 48.53 | 59.33 | **61.18**
Winograd NLI | Natural Language Inference | 56.34 | 55.87 | **56.34**
Choice of Plausible Alternative (COPA) | Natural Language Inference | 54.92 | 51.13 | **58.33**
Amrita Exact Paraphrase | Paraphrase Detection | **93.81** | 93.02 | 93.75
Amrita Rough Paraphrase | Paraphrase Detection | 83.38 | 82.20 | **84.33**
Average | | 69.84 | **74.42** | 73.66
\* Note: all models have been restricted to a max_seq_length of 128.
## Downloads
The model can be downloaded [here](https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/models/indic-bert-v1.tar.gz). Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from [Huggingface](https://huggingface.co/ai4bharat/indic-bert).
## Citing
If you are using any of the resources, please cite the following article:
```
@inproceedings{kakwani2020indicnlpsuite,
title={{IndicNLPSuite: Monolingual Corpora, Evaluation Benchmarks and Pre-trained Multilingual Language Models for Indian Languages}},
author={Divyanshu Kakwani and Anoop Kunchukuttan and Satish Golla and Gokul N.C. and Avik Bhattacharyya and Mitesh M. Khapra and Pratyush Kumar},
year={2020},
booktitle={Findings of EMNLP},
}
```
We would like to hear from you if:
- You are using our resources. Please let us know how you are putting these resources to use.
- You have any feedback on these resources.
## License
The IndicBERT code (and models) are released under the MIT License.
## Contributors
- Divyanshu Kakwani
- Anoop Kunchukuttan
- Gokul NC
- Satish Golla
- Avik Bhattacharyya
- Mitesh Khapra
- Pratyush Kumar
This work is the outcome of a volunteer effort as part of [AI4Bharat initiative](https://ai4bharat.org).
## Contact
- Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com))
- Mitesh Khapra ([miteshk@cse.iitm.ac.in](mailto:miteshk@cse.iitm.ac.in))
- Pratyush Kumar ([pratyush@cse.iitm.ac.in](mailto:pratyush@cse.iitm.ac.in))
|
{"language": ["as", "bn", "en", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"], "license": "mit", "datasets": ["AI4Bharat IndicNLP Corpora"]}
|
ai4bharat/indic-bert
| null |
[
"transformers",
"pytorch",
"albert",
"as",
"bn",
"en",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as",
"bn",
"en",
"gu",
"hi",
"kn",
"ml",
"mr",
"or",
"pa",
"ta",
"te"
] |
TAGS
#transformers #pytorch #albert #as #bn #en #gu #hi #kn #ml #mr #or #pa #ta #te #license-mit #endpoints_compatible #has_space #region-us
|
IndicBERT
=========
IndicBERT is a multilingual ALBERT model pretrained exclusively on 12 major Indian languages. It is pre-trained on our novel monolingual corpus of around 9 billion tokens and subsequently evaluated on a set of diverse tasks. IndicBERT has much fewer parameters than other multilingual models (mBERT, XLM-R etc.) while it also achieves a performance on-par or better than these models.
The 12 languages covered by IndicBERT are: Assamese, Bengali, English, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, Telugu.
The code can be found here. For more information, checkout our project page or our paper.
Pretraining Corpus
------------------
We pre-trained indic-bert on AI4Bharat's monolingual corpus. The corpus has the following distribution of languages:
Evaluation Results
------------------
IndicBERT is evaluated on IndicGLUE and some additional tasks. The results are summarized below. For more details about the tasks, refer our official repo
#### IndicGLUE
#### Additional Tasks
\* Note: all models have been restricted to a max\_seq\_length of 128.
Downloads
---------
The model can be downloaded here. Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from Huggingface.
Citing
------
If you are using any of the resources, please cite the following article:
We would like to hear from you if:
* You are using our resources. Please let us know how you are putting these resources to use.
* You have any feedback on these resources.
License
-------
The IndicBERT code (and models) are released under the MIT License.
Contributors
------------
* Divyanshu Kakwani
* Anoop Kunchukuttan
* Gokul NC
* Satish Golla
* Avik Bhattacharyya
* Mitesh Khapra
* Pratyush Kumar
This work is the outcome of a volunteer effort as part of AI4Bharat initiative.
Contact
-------
* Anoop Kunchukuttan (anoop.kunchukuttan@URL)
* Mitesh Khapra (miteshk@URL)
* Pratyush Kumar (pratyush@URL)
|
[
"#### IndicGLUE",
"#### Additional Tasks\n\n\n\n\\* Note: all models have been restricted to a max\\_seq\\_length of 128.\n\n\nDownloads\n---------\n\n\nThe model can be downloaded here. Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from Huggingface.\n\n\nCiting\n------\n\n\nIf you are using any of the resources, please cite the following article:\n\n\nWe would like to hear from you if:\n\n\n* You are using our resources. Please let us know how you are putting these resources to use.\n* You have any feedback on these resources.\n\n\nLicense\n-------\n\n\nThe IndicBERT code (and models) are released under the MIT License.\n\n\nContributors\n------------\n\n\n* Divyanshu Kakwani\n* Anoop Kunchukuttan\n* Gokul NC\n* Satish Golla\n* Avik Bhattacharyya\n* Mitesh Khapra\n* Pratyush Kumar\n\n\nThis work is the outcome of a volunteer effort as part of AI4Bharat initiative.\n\n\nContact\n-------\n\n\n* Anoop Kunchukuttan (anoop.kunchukuttan@URL)\n* Mitesh Khapra (miteshk@URL)\n* Pratyush Kumar (pratyush@URL)"
] |
[
"TAGS\n#transformers #pytorch #albert #as #bn #en #gu #hi #kn #ml #mr #or #pa #ta #te #license-mit #endpoints_compatible #has_space #region-us \n",
"#### IndicGLUE",
"#### Additional Tasks\n\n\n\n\\* Note: all models have been restricted to a max\\_seq\\_length of 128.\n\n\nDownloads\n---------\n\n\nThe model can be downloaded here. Both tf checkpoints and pytorch binaries are included in the archive. Alternatively, you can also download it from Huggingface.\n\n\nCiting\n------\n\n\nIf you are using any of the resources, please cite the following article:\n\n\nWe would like to hear from you if:\n\n\n* You are using our resources. Please let us know how you are putting these resources to use.\n* You have any feedback on these resources.\n\n\nLicense\n-------\n\n\nThe IndicBERT code (and models) are released under the MIT License.\n\n\nContributors\n------------\n\n\n* Divyanshu Kakwani\n* Anoop Kunchukuttan\n* Gokul NC\n* Satish Golla\n* Avik Bhattacharyya\n* Mitesh Khapra\n* Pratyush Kumar\n\n\nThis work is the outcome of a volunteer effort as part of AI4Bharat initiative.\n\n\nContact\n-------\n\n\n* Anoop Kunchukuttan (anoop.kunchukuttan@URL)\n* Mitesh Khapra (miteshk@URL)\n* Pratyush Kumar (pratyush@URL)"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# recipe-improver
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5570
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a Keras sketch of the optimizer appears after the list):
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 5539, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
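The optimizer entry above corresponds roughly to the following Keras configuration; this is only a sketch of the listed values, not the original training script:
```python
# Sketch of the optimizer described above (illustrative; not the original training script).
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=5539,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```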
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.5570 | 0 |
### Framework versions
- Transformers 4.15.0
- TensorFlow 2.7.0
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "recipe-improver", "results": []}]}
|
aidan-o-brien/recipe-improver
| null |
[
"transformers",
"tf",
"albert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #albert #question-answering #generated_from_keras_callback #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
recipe-improver
===============
This model is a fine-tuned version of albert-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 2.5570
* Epoch: 0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'Adam', 'learning\_rate': {'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 5e-05, 'decay\_steps': 5539, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.15.0
* TensorFlow 2.7.0
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 5539, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* TensorFlow 2.7.0\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #tf #albert #question-answering #generated_from_keras_callback #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'learning\\_rate': {'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 5539, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* TensorFlow 2.7.0\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0607
- Precision: 0.9260
- Recall: 0.9384
- F1: 0.9322
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch appears after the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
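These values map onto `TrainingArguments` roughly as follows; the sketch only mirrors the list above (the `output_dir` name is illustrative) and is not the original training script:
```python
# Sketch of TrainingArguments matching the hyperparameters listed above (illustrative only).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",  # illustrative output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```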
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2545 | 1.0 | 878 | 0.0711 | 0.9096 | 0.9214 | 0.9154 | 0.9800 |
| 0.0555 | 2.0 | 1756 | 0.0593 | 0.9185 | 0.9356 | 0.9270 | 0.9827 |
| 0.0297 | 3.0 | 2634 | 0.0607 | 0.9260 | 0.9384 | 0.9322 | 0.9834 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9260322366968425, "name": "Precision"}, {"type": "recall", "value": 0.9383599955252265, "name": "Recall"}, {"type": "f1", "value": 0.9321553592265377, "name": "F1"}, {"type": "accuracy", "value": 0.9834146186474335, "name": "Accuracy"}]}]}]}
|
aidj/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0607
* Precision: 0.9260
* Recall: 0.9384
* F1: 0.9322
* Accuracy: 0.9834
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vivos_prj1tha
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset.
It achieves the following results on the evaluation set (a minimal usage sketch follows these results):
- Loss: 0.7737
- Wer: 0.5128
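A minimal inference sketch with the `automatic-speech-recognition` pipeline, assuming 16 kHz mono audio as is usual for wav2vec2 checkpoints (the audio path is a placeholder):
```python
# Minimal inference sketch (audio path is a placeholder; 16 kHz mono audio assumed).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="aiface/vivos_prj1tha")
print(asr("path/to/vivos_sample.wav")["text"])
```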
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0541 | 10.25 | 400 | 1.0293 | 0.7051 |
| 0.5514 | 20.51 | 800 | 0.7737 | 0.5128 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["vivos_dataset"], "model-index": [{"name": "vivos_prj1tha", "results": []}]}
|
aiface/vivos_prj1tha
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:vivos_dataset",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-vivos_dataset #license-apache-2.0 #endpoints_compatible #region-us
|
vivos\_prj1tha
==============
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the vivos\_dataset dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7737
* Wer: 0.5128
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-vivos_dataset #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
aimiekhe/yummv1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
aimiekhe/yummv2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
summarization
|
transformers
|
# BART base model fine-tuned on CNN Dailymail
- This model is a [bart-base model](https://huggingface.co/facebook/bart-base) fine-tuned on the [CNN/Dailymail summarization dataset](https://huggingface.co/datasets/cnn_dailymail) using [Ainize Teachable-NLP](https://ainize.ai/teachable-nlp).
The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
The Authors’ code can be found here:
https://github.com/pytorch/fairseq/tree/master/examples/bart
## Usage
### Python Code
```python
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
# Load Model and Tokenize
tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/bart-base-cnn")
model = BartForConditionalGeneration.from_pretrained("ainize/bart-base-cnn")
# Encode Input Text
input_text = '(CNN) -- South Korea launched an investigation Tuesday into reports of toxic chemicals being dumped at a former U.S. military base, the Defense Ministry said. The tests follow allegations of American soldiers burying chemicals on Korean soil. The first tests are being carried out by a joint military, government and civilian task force at the site of what was Camp Mercer, west of Seoul. "Soil and underground water will be taken in the areas where toxic chemicals were allegedly buried," said the statement from the South Korean Defense Ministry. Once testing is finished, the government will decide on how to test more than 80 other sites -- all former bases. The alarm was raised this month when a U.S. veteran alleged barrels of the toxic herbicide Agent Orange were buried at an American base in South Korea in the late 1970s. Two of his fellow soldiers corroborated his story about Camp Carroll, about 185 miles (300 kilometers) southeast of the capital, Seoul. "We\'ve been working very closely with the Korean government since we had the initial claims," said Lt. Gen. John Johnson, who is heading the Camp Carroll Task Force. "If we get evidence that there is a risk to health, we are going to fix it." A joint U.S.- South Korean investigation is being conducted at Camp Carroll to test the validity of allegations. The U.S. military sprayed Agent Orange from planes onto jungles in Vietnam to kill vegetation in an effort to expose guerrilla fighters. Exposure to the chemical has been blamed for a wide variety of ailments, including certain forms of cancer and nerve disorders. It has also been linked to birth defects, according to the Department of Veterans Affairs. Journalist Yoonjung Seo contributed to this report.'
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate Summary Text Ids
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=142,
min_length=56,
num_beams=4,
)
# Decoding Text
print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True))
```
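The same checkpoint can also be driven through the `summarization` pipeline; the snippet below is a convenience sketch that reuses `input_text` from the example above and assumes the checkpoint resolves to a seq2seq LM head, as the code above suggests.
```python
# Convenience sketch using the summarization pipeline (reuses input_text from the example above).
from transformers import pipeline

summarizer = pipeline("summarization", model="ainize/bart-base-cnn")
summary = summarizer(input_text, max_length=142, min_length=56)
print(summary[0]["summary_text"])
```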
### API
You can experience this model through [ainize](https://ainize.ai/gkswjdzz/summarize-torchserve?branch=main).
|
{"language": "en", "license": "apache-2.0", "tags": ["summarization", "bart"], "datasets": ["cnn_dailymail"]}
|
ainize/bart-base-cnn
| null |
[
"transformers",
"pytorch",
"bart",
"feature-extraction",
"summarization",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bart #feature-extraction #summarization #en #dataset-cnn_dailymail #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
# BART base model fine-tuned on CNN Dailymail
- This model is a bart-base model fine-tuned on the CNN/Dailymail summarization dataset using Ainize Teachable-NLP.
The Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,
Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.
The Authors’ code can be found here:
URL
## Usage
### Python Code
### API
You can experience this model through ainize.
|
[
"# BART base model fine-tuned on CNN Dailymail\n\n- This model is a bart-base model fine-tuned on the CNN/Dailymail summarization dataset using Ainize Teachable-NLP.\n\nThe Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,\n\nBart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n\nThe pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n\nBART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.\n\nThe Authors’ code can be found here:\nURL",
"## Usage",
"### Python Code",
"### API\nYou can experience this model through ainize."
] |
[
"TAGS\n#transformers #pytorch #bart #feature-extraction #summarization #en #dataset-cnn_dailymail #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"# BART base model fine-tuned on CNN Dailymail\n\n- This model is a bart-base model fine-tuned on the CNN/Dailymail summarization dataset using Ainize Teachable-NLP.\n\nThe Bart model was proposed by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer on 29 Oct, 2019. According to the abstract,\n\nBart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n\nThe pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n\nBART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE.\n\nThe Authors’ code can be found here:\nURL",
"## Usage",
"### Python Code",
"### API\nYou can experience this model through ainize."
] |
feature-extraction
|
transformers
|
Original repository: <https://huggingface.co/EleutherAI/gpt-j-6B>
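Since this card only points to the original repository, here is a minimal loading sketch; it assumes a transformers version with GPT-J support and enough memory for the float16 weights:
```python
# Minimal loading sketch (assumes GPT-J support in transformers and enough memory for fp16 weights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ainize/gpt-j-6B-float16")
model = AutoModelForCausalLM.from_pretrained(
    "ainize/gpt-j-6B-float16", torch_dtype=torch.float16
)

input_ids = tokenizer("The meaning of life is", return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```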
|
{"license": "apache-2.0"}
|
ainize/gpt-j-6B-float16
| null |
[
"transformers",
"pytorch",
"gptj",
"feature-extraction",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gptj #feature-extraction #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Original repository : <URL
|
[] |
[
"TAGS\n#transformers #pytorch #gptj #feature-extraction #license-apache-2.0 #endpoints_compatible #has_space #region-us \n"
] |
text-generation
|
transformers
|
### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Base model: e-tony/gpt2-rnm
Epoch: 1
Train runtime: 3.4982 secs
Loss: 3.0894
Training notebook: [Colab](https://colab.research.google.com/drive/1RawVxulLETFicWMY0YANUdP-H-e7Eeyc)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
{}
|
ainize/gpt2-rnm-with-only-rick
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### Model information
Fine tuning data 1: URL
Base model: e-tony/gpt2-rnm
Epoch: 1
Train runtime: 3.4982 secs
Loss: 3.0894
Training notebook: Colab
### ===Teachable NLP=== ###
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
|
[
"### Model information\n \n Fine tuning data 1: URL\n Base model: e-tony/gpt2-rnm\n Epoch: 1\n Train runtime: 3.4982 secs\n Loss: 3.0894\n\n\nTraining notebook: Colab",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n \n Fine tuning data 1: URL\n Base model: e-tony/gpt2-rnm\n Epoch: 1\n Train runtime: 3.4982 secs\n Loss: 3.0894\n\n\nTraining notebook: Colab",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
text-generation
|
transformers
|
### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Base model: e-tony/gpt2-rnm
Epoch: 3
Train runtime: 7.1779 secs
Loss: 2.5694
Training notebook: [Colab](https://colab.research.google.com/drive/12NvO1SIZevF8ybJqfN9O21I3i9bU1dOO#scrollTo=KUsyn02WWmf5)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
{}
|
ainize/gpt2-rnm-with-season-1
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### Model information
Fine tuning data 1: URL
Base model: e-tony/gpt2-rnm
Epoch: 3
Train runtime: 7.1779 secs
Loss: 2.5694
Training notebook: Colab
### ===Teachable NLP=== ###
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
|
[
"### Model information\n \n Fine tuning data 1: URL\n Base model: e-tony/gpt2-rnm\n Epoch: 3\n Train runtime: 7.1779 secs\n Loss: 2.5694\n \n\n\nTraining notebook: Colab",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n \n Fine tuning data 1: URL\n Base model: e-tony/gpt2-rnm\n Epoch: 3\n Train runtime: 7.1779 secs\n Loss: 2.5694\n \n\n\nTraining notebook: Colab",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
text-generation
|
transformers
|
### Model information
Fine tuning data 1: https://www.kaggle.com/andradaolteanu/rickmorty-scripts
Fine tuning data 2: https://www.kaggle.com/mikhailgaerlan/spongebob-squarepants-completed-transcripts
Base model: e-tony/gpt2-rnm
Epoch: 2
Train runtime: 790.0612 secs
Loss: 2.8569
API page: [Ainize](https://ainize.ai/fpem123/GPT2-Rick-N-Morty-with-SpongeBob?branch=master)
Demo page: [End-point](https://master-gpt2-rick-n-morty-with-sponge-bob-fpem123.endpoint.ainize.ai/)
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
{}
|
ainize/gpt2-rnm-with-spongebob
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
### Model information
Fine tuning data 1: URL
Fine tuning data 2: URL
Base model: e-tony/gpt2-rnm
Epoch: 2
Train runtime: 790.0612 secs
Loss: 2.8569
API page: Ainize
Demo page: End-point
### ===Teachable NLP=== ###
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
|
[
"### Model information\n \n Fine tuning data 1: URL\n Fine tuning data 2: URL\n Base model: e-tony/gpt2-rnm\n Epoch: 2\n Train runtime: 790.0612 secs\n Loss: 2.8569\n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Model information\n \n Fine tuning data 1: URL\n Fine tuning data 2: URL\n Base model: e-tony/gpt2-rnm\n Epoch: 2\n Train runtime: 790.0612 secs\n Loss: 2.8569\n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
text-generation
|
transformers
|
### Model information
Fine tuning data: https://www.kaggle.com/mikhailgaerlan/spongebob-squarepants-completed-transcripts
License: CC-BY-SA
Base model: gpt-2 large
Epoch: 50
Train runtime: 14723.0716 secs
Loss: 0.0268
API page: [Ainize](https://ainize.ai/fpem123/GPT2-Spongebob?branch=master)
Demo page: [End-point](https://master-gpt2-spongebob-fpem123.endpoint.ainize.ai/)
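For reference, a minimal generation sketch with this checkpoint; the prompt format and sampling settings are assumptions, since the card does not document them:
```python
# Minimal generation sketch (prompt format and sampling settings are assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ainize/gpt2-spongebob-script-large")
model = AutoModelForCausalLM.from_pretrained("ainize/gpt2-spongebob-script-large")

input_ids = tokenizer.encode("SpongeBob: ", return_tensors="pt")
output = model.generate(input_ids, max_length=60, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```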
### ===Teachable NLP=== ###
Training a GPT-2 model normally requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free.
Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp)
Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
|
{}
|
ainize/gpt2-spongebob-script-large
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
### Model information
Fine tuning data: URL
License: CC-BY-SA
Base model: gpt-2 large
Epoch: 50
Train runtime: 14723.0716 secs
Loss: 0.0268
API page: Ainize
Demo page: End-point
### ===Teachable NLP=== ###
To train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.
Teachable NLP: Teachable NLP
Tutorial: Tutorial
|
[
"### Model information\n \n Fine tuning data: URL\n License: CC-BY-SA\n Base model: gpt-2 large \n Epoch: 50\n Train runtime: 14723.0716 secs\n Loss: 0.0268\n \n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Model information\n \n Fine tuning data: URL\n License: CC-BY-SA\n Base model: gpt-2 large \n Epoch: 50\n Train runtime: 14723.0716 secs\n Loss: 0.0268\n \n\nAPI page: Ainize\n\nDemo page: End-point",
"### ===Teachable NLP=== ###\n\nTo train a GPT-2 model, write code and require GPU resources, but can easily fine-tune and get an API to use the model here for free.\n\nTeachable NLP: Teachable NLP\n\nTutorial: Tutorial"
] |
question-answering
|
transformers
|
# bert-base for QA
**Code:** See [Ainize Workspace](https://link.ainize.ai/3FjvBVn)
**klue-bert-base-mrc DEMO**: [Ainize DEMO](https://main-klue-mrc-bert-scy6500.endpoint.ainize.ai/)
**klue-bert-base-mrc API**: [Ainize API](https://ainize.ai/scy6500/KLUE-MRC-BERT?branch=main)
## Overview
**Language model:** klue/bert-base
**Language:** Korean
**Downstream-task:** Extractive QA
**Training data:** KLUE-MRC
**Eval data:** KLUE-MRC
## Usage
### In Transformers
```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-mrc")
model = AutoModelForQuestionAnswering.from_pretrained("ainize/klue-bert-base-mrc")
context = "your context"
question = "your question"
encodings = tokenizer(context, question, max_length=512, truncation=True,
padding="max_length", return_token_type_ids=False)
encodings = {key: torch.tensor([val]) for key, val in encodings.items()}
input_ids = encodings["input_ids"]
attention_mask = encodings["attention_mask"]
pred = model(input_ids, attention_mask=attention_mask)
start_logits, end_logits = pred.start_logits, pred.end_logits
token_start_index, token_end_index = start_logits.argmax(dim=-1), end_logits.argmax(dim=-1)
pred_ids = input_ids[0][token_start_index: token_end_index + 1]
prediction = tokenizer.decode(pred_ids)
```
## About us
[Teachable NLP](https://ainize.ai/teachable-nlp) - Train NLP models with your own text without writing any code
[Ainize](https://ainize.ai/) - Deploy ML project using free gpu
|
{"language": "ko", "license": "cc-by-sa-4.0", "tags": ["bert", "mrc"], "datasets": ["klue"]}
|
ainize/klue-bert-base-mrc
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"mrc",
"ko",
"dataset:klue",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #bert #question-answering #mrc #ko #dataset-klue #license-cc-by-sa-4.0 #endpoints_compatible #region-us
|
# bert-base for QA
Code: See Ainize Workspace
klue-bert-base-mrc DEMO: Ainize DEMO
klue-bert-base-mrc API: Ainize API
## Overview
Language model: klue/bert-base
Language: Korean
Downstream-task: Extractive QA
Training data: KLUE-MRC
Eval data: KLUE-MRC
## Usage
### In Transformers
## About us
Teachable NLP - Train NLP models with your own text without writing any code
Ainize - Deploy ML project using free gpu
|
[
"# bert-base for QA \n\nCode: See Ainize Workspace \n\nklue-bert-base-mrc DEMO: Ainize DEMO\n\nklue-bert-base-mrc API: Ainize API",
"## Overview\nLanguage model: klue/bert-base \nLanguage: Korean \nDownstream-task: Extractive QA \nTraining data: KLUE-MRC \nEval data: KLUE-MRC",
"## Usage",
"### In Transformers",
"## About us\nTeachable NLP - Train NLP models with your own text without writing any code \nAinize - Deploy ML project using free gpu"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #mrc #ko #dataset-klue #license-cc-by-sa-4.0 #endpoints_compatible #region-us \n",
"# bert-base for QA \n\nCode: See Ainize Workspace \n\nklue-bert-base-mrc DEMO: Ainize DEMO\n\nklue-bert-base-mrc API: Ainize API",
"## Overview\nLanguage model: klue/bert-base \nLanguage: Korean \nDownstream-task: Extractive QA \nTraining data: KLUE-MRC \nEval data: KLUE-MRC",
"## Usage",
"### In Transformers",
"## About us\nTeachable NLP - Train NLP models with your own text without writing any code \nAinize - Deploy ML project using free gpu"
] |
text-classification
|
transformers
|
# bert-base for KLUE Relation Extraction task.
Fine-tuned klue/bert-base using KLUE RE dataset.
- <a href="https://klue-benchmark.com/">KLUE Benchmark Official Webpage</a>
- <a href="https://github.com/KLUE-benchmark/KLUE">KLUE Official Github</a>
- <a href="https://github.com/ainize-team/klue-re-workspace">KLUE RE Github</a>
- Run KLUE RE on free GPU : <a href="https://ainize.ai/workspace/create?imageId=hnj95592adzr02xPTqss&git=https://github.com/ainize-team/klue-re-workspace">Ainize Workspace</a>
<br>
# Usage
<pre><code>
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re")
model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re")
# Add "<subj>", "</subj>" to both ends of the subject object and "<obj>", "</obj>" to both ends of the object object.
sentence = "<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다."
encodings = tokenizer(sentence,
max_length=128,
truncation=True,
padding="max_length",
return_tensors="pt")
outputs = model(**encodings)
logits = outputs['logits']
preds = torch.argmax(logits, dim=1)
</code></pre>
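To map the predicted index to a relation name, the label map stored in the model config can be used, assuming the checkpoint ships a meaningful `id2label` mapping (this continues the example above):
<pre><code>
# Map the predicted index to a relation label, assuming the config ships id2label.
relation = model.config.id2label[preds.item()]
print(relation)
</code></pre>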
<br>
# About us
- <a href="https://ainize.ai/teachable-nlp">Teachable NLP</a> - Train NLP models with your own text without writing any code
- <a href="https://ainize.ai/">Ainize</a> - Deploy ML project using free gpu
|
{}
|
ainize/klue-bert-base-re
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base for KLUE Relation Extraction task.
Fine-tuned klue/bert-base using KLUE RE dataset.
- <a href="URL Benchmark Official Webpage</a>
- <a href="URL Official Github</a>
- <a href="URL RE Github</a>
- Run KLUE RE on free GPU : <a href="URL/URL Workspace</a>
<br>
# Usage
<pre><code>
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("ainize/klue-bert-base-re")
model = AutoModelForSequenceClassification.from_pretrained("ainize/klue-bert-base-re")
# Add "<subj>", "</subj>" to both ends of the subject object and "<obj>", "</obj>" to both ends of the object object.
sentence = "<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다."
encodings = tokenizer(sentence,
max_length=128,
truncation=True,
padding="max_length",
return_tensors="pt")
outputs = model(encodings)
logits = outputs['logits']
preds = URL(logits, dim=1)
</code></pre>
<br>
# About us
- <a href="URL NLP</a> - Train NLP models with your own text without writing any code
- <a href="URL - Deploy ML project using free gpu
|
[
"# bert-base for KLUE Relation Extraction task.\nFine-tuned klue/bert-base using KLUE RE dataset.\n- <a href=\"URL Benchmark Official Webpage</a>\n- <a href=\"URL Official Github</a> \n- <a href=\"URL RE Github</a>\n- Run KLUE RE on free GPU : <a href=\"URL/URL Workspace</a>\n\n<br>",
"# Usage\n<pre><code>\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\ntokenizer = AutoTokenizer.from_pretrained(\"ainize/klue-bert-base-re\")\nmodel = AutoModelForSequenceClassification.from_pretrained(\"ainize/klue-bert-base-re\")",
"# Add \"<subj>\", \"</subj>\" to both ends of the subject object and \"<obj>\", \"</obj>\" to both ends of the object object.\nsentence = \"<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다.\"\n\nencodings = tokenizer(sentence, \n max_length=128, \n truncation=True, \n padding=\"max_length\", \n return_tensors=\"pt\")\n\noutputs = model(encodings)\n\nlogits = outputs['logits']\n\npreds = URL(logits, dim=1)\n</code></pre>\n\n<br>",
"# About us\n- <a href=\"URL NLP</a> - Train NLP models with your own text without writing any code\n- <a href=\"URL - Deploy ML project using free gpu"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base for KLUE Relation Extraction task.\nFine-tuned klue/bert-base using KLUE RE dataset.\n- <a href=\"URL Benchmark Official Webpage</a>\n- <a href=\"URL Official Github</a> \n- <a href=\"URL RE Github</a>\n- Run KLUE RE on free GPU : <a href=\"URL/URL Workspace</a>\n\n<br>",
"# Usage\n<pre><code>\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\n\ntokenizer = AutoTokenizer.from_pretrained(\"ainize/klue-bert-base-re\")\nmodel = AutoModelForSequenceClassification.from_pretrained(\"ainize/klue-bert-base-re\")",
"# Add \"<subj>\", \"</subj>\" to both ends of the subject object and \"<obj>\", \"</obj>\" to both ends of the object object.\nsentence = \"<subj>손흥민</subj>은 <obj>대한민국</obj>에서 태어났다.\"\n\nencodings = tokenizer(sentence, \n max_length=128, \n truncation=True, \n padding=\"max_length\", \n return_tensors=\"pt\")\n\noutputs = model(encodings)\n\nlogits = outputs['logits']\n\npreds = URL(logits, dim=1)\n</code></pre>\n\n<br>",
"# About us\n- <a href=\"URL NLP</a> - Train NLP models with your own text without writing any code\n- <a href=\"URL - Deploy ML project using free gpu"
] |
summarization
|
transformers
|
# kobart-news
- This model is [kobart](https://huggingface.co/hyunwoongko/kobart) fine-tuned on the [문서요약 텍스트/신문기사](https://aihub.or.kr/aidata/8054) (AI Hub document summarization / newspaper article) dataset using [Ainize Teachable-NLP](https://ainize.ai/teachable-nlp).
## Usage
### Python Code
```python
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
# Load Model and Tokenizer
tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news")
model = BartForConditionalGeneration.from_pretrained("ainize/kobart-news")
# Encode Input Text
input_text = '국내 전반적인 경기침체로 상가 건물주의 수익도 전국적인 감소세를 보이고 있는 것으로 나타났다. 수익형 부동산 연구개발기업 상가정보연구소는 한국감정원 통계를 분석한 결과 전국 중대형 상가 순영업소득(부동산에서 발생하는 임대수입, 기타수입에서 제반 경비를 공제한 순소득)이 1분기 ㎡당 3만4200원에서 3분기 2만5800원으로 감소했다고 17일 밝혔다. 수도권, 세종시, 지방광역시에서 순영업소득이 가장 많이 감소한 지역은 3분기 1만3100원을 기록한 울산으로, 1분기 1만9100원 대비 31.4% 감소했다. 이어 대구(-27.7%), 서울(-26.9%), 광주(-24.9%), 부산(-23.5%), 세종(-23.4%), 대전(-21%), 경기(-19.2%), 인천(-18.5%) 순으로 감소했다. 지방 도시의 경우도 비슷했다. 경남의 3분기 순영업소득은 1만2800원으로 1분기 1만7400원 대비 26.4% 감소했으며 제주(-25.1%), 경북(-24.1%), 충남(-20.9%), 강원(-20.9%), 전남(-20.1%), 전북(-17%), 충북(-15.3%) 등도 감소세를 보였다. 조현택 상가정보연구소 연구원은 "올해 내수 경기의 침체된 분위기가 유지되며 상가, 오피스 등을 비롯한 수익형 부동산 시장의 분위기도 경직된 모습을 보였고 오피스텔, 지식산업센터 등의 수익형 부동산 공급도 증가해 공실의 위험도 늘었다"며 "실제 올 3분기 전국 중대형 상가 공실률은 11.5%를 기록하며 1분기 11.3% 대비 0.2% 포인트 증가했다"고 말했다. 그는 "최근 소셜커머스(SNS를 통한 전자상거래), 음식 배달 중개 애플리케이션, 중고 물품 거래 애플리케이션 등의 사용 증가로 오프라인 매장에 영향을 미쳤다"며 "향후 지역, 콘텐츠에 따른 상권 양극화 현상은 심화될 것으로 보인다"고 덧붙였다.'
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate Summary Text Ids
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=142,
min_length=56,
num_beams=4,
)
# Decode Summary Text
print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True))
```
### API and Demo
You can experience this model through [ainize-api](https://ainize.ai/gkswjdzz/summarize-torchserve?branch=main) and [ainize-demo](https://main-summarize-torchserve-gkswjdzz.endpoint.ainize.ai/).
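If you prefer the high-level API, the same checkpoint can usually be driven through the `summarization` pipeline as well; a minimal, hedged sketch, passing the fast tokenizer explicitly as in the example above (the short input text here is only illustrative):
```python
# Hedged sketch: the same checkpoint through the high-level pipeline API.
from transformers import PreTrainedTokenizerFast, pipeline

summarizer = pipeline(
    "summarization",
    model="ainize/kobart-news",
    tokenizer=PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news"),
)
text = "국내 전반적인 경기침체로 상가 건물주의 수익도 전국적인 감소세를 보이고 있는 것으로 나타났다."  # first sentence of the article above
print(summarizer(text, max_length=142, min_length=56, num_beams=4)[0]["summary_text"])
```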
|
{"language": "ko", "license": "mit", "tags": ["summarization", "bart"]}
|
ainize/kobart-news
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #bart #text2text-generation #summarization #ko #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# kobart-news
- This model is a kobart fine-tuned on the 문서요약 텍스트/신문기사 using Ainize Teachable-NLP.
## Usage
### Python Code
### API and Demo
You can experience this model through ainize-api and ainize-demo.
|
[
"# kobart-news\n- This model is a kobart fine-tuned on the 문서요약 텍스트/신문기사 using Ainize Teachable-NLP.",
"## Usage",
"### Python Code",
"### API and Demo\nYou can experience this model through ainize-api and ainize-demo."
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #summarization #ko #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# kobart-news\n- This model is a kobart fine-tuned on the 문서요약 텍스트/신문기사 using Ainize Teachable-NLP.",
"## Usage",
"### Python Code",
"### API and Demo\nYou can experience this model through ainize-api and ainize-demo."
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-all-languages
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2454
- Rouge1: 8.3826
- Rouge2: 3.5524
- Rougel: 6.8656
- Rougelsum: 7.8362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
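For reference, the hyperparameters above map roughly onto the Hugging Face Trainer API; the following is a minimal, hedged sketch rather than the original training script (the dataset and preprocessing are unspecified in this card, and the output directory name is illustrative).
```python
# Hedged sketch of how the listed hyperparameters could be expressed with
# Seq2SeqTrainingArguments; dataset loading and preprocessing are omitted
# because the card does not specify them ("unknown dataset").
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments

model_name = "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-base-wikinewssum-all-languages",  # illustrative
    learning_rate=5.6e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # effective train batch size 8
    num_train_epochs=8,
    lr_scheduler_type="linear",
    seed=42,
    predict_with_generate=True,      # assumption: needed for the ROUGE evaluation
)
```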
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 3467 | 2.4034 | 8.0363 | 3.2484 | 6.5409 | 7.477 |
| No log | 2.0 | 6934 | 2.3276 | 8.1054 | 3.2905 | 6.5765 | 7.5687 |
| No log | 3.0 | 10401 | 2.2976 | 8.169 | 3.4272 | 6.6597 | 7.6435 |
| No log | 4.0 | 13868 | 2.2795 | 8.2941 | 3.5353 | 6.7881 | 7.7664 |
| 2.8057 | 5.0 | 17335 | 2.2621 | 8.3302 | 3.5599 | 6.8238 | 7.7928 |
| 2.8057 | 6.0 | 20802 | 2.2547 | 8.3818 | 3.5886 | 6.8672 | 7.844 |
| 2.8057 | 7.0 | 24269 | 2.2472 | 8.3809 | 3.5696 | 6.8575 | 7.8327 |
| 2.8057 | 8.0 | 27736 | 2.2454 | 8.3826 | 3.5524 | 6.8656 | 7.8362 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-all-languages", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-all-languages
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-all-languages
==================================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.2454
* Rouge1: 8.3826
* Rouge2: 3.5524
* Rougel: 6.8656
* Rougelsum: 7.8362
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-100
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.6225
- Rouge1: 3.909
- Rouge2: 0.9312
- Rougel: 3.3835
- Rougelsum: 3.7786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 0.96 | 12 | 14.4949 | 2.7398 | 0.7181 | 2.491 | 2.6561 |
| No log | 1.96 | 24 | 10.5056 | 4.4428 | 1.4293 | 3.8469 | 4.2869 |
| No log | 2.96 | 36 | 8.9856 | 4.1179 | 1.229 | 3.5726 | 3.9693 |
| No log | 3.96 | 48 | 7.7950 | 3.9217 | 1.1339 | 3.4256 | 3.7905 |
| No log | 4.96 | 60 | 7.0734 | 3.8004 | 1.0326 | 3.3246 | 3.6766 |
| No log | 5.96 | 72 | 6.7897 | 3.6351 | 0.9162 | 3.1839 | 3.5149 |
| No log | 6.96 | 84 | 6.6610 | 3.7486 | 0.8829 | 3.2583 | 3.6193 |
| No log | 7.96 | 96 | 6.6225 | 3.909 | 0.9312 | 3.3835 | 3.7786 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-english-100", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-english-100
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-english-100
================================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 6.6225
* Rouge1: 3.909
* Rouge2: 0.9312
* Rougel: 3.3835
* Rougelsum: 3.7786
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english-1000
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4724
- Rouge1: 7.7389
- Rouge2: 3.1606
- Rougel: 6.3317
- Rougelsum: 7.2487
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 125 | 2.6981 | 7.1504 | 2.6253 | 5.8261 | 6.7427 |
| No log | 2.0 | 250 | 2.5597 | 7.4666 | 2.9362 | 6.0965 | 6.9699 |
| No log | 3.0 | 375 | 2.5145 | 7.4599 | 2.9449 | 6.0941 | 6.9734 |
| No log | 4.0 | 500 | 2.4904 | 7.5063 | 2.975 | 6.137 | 7.0027 |
| No log | 5.0 | 625 | 2.4904 | 7.6027 | 3.0582 | 6.2161 | 7.0832 |
| No log | 6.0 | 750 | 2.4801 | 7.7601 | 3.1916 | 6.3689 | 7.2686 |
| No log | 7.0 | 875 | 2.4737 | 7.7162 | 3.1332 | 6.3113 | 7.2283 |
| No log | 8.0 | 1000 | 2.4724 | 7.7389 | 3.1606 | 6.3317 | 7.2487 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-english-1000", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-english-1000
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-english-1000
=================================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4724
* Rouge1: 7.7389
* Rouge2: 3.1606
* Rougel: 6.3317
* Rougelsum: 7.2487
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-english
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3040
- Rouge1: 8.9565
- Rouge2: 3.6563
- Rougel: 7.1346
- Rougelsum: 8.3802
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 1010 | 2.4360 | 8.7287 | 3.5817 | 7.0093 | 8.1879 |
| No log | 2.0 | 2020 | 2.3922 | 8.7227 | 3.5385 | 6.96 | 8.1887 |
| No log | 3.0 | 3030 | 2.3422 | 8.8565 | 3.5772 | 7.0203 | 8.2957 |
| No log | 4.0 | 4040 | 2.3288 | 8.89 | 3.645 | 7.0602 | 8.3314 |
| 3.1253 | 5.0 | 5050 | 2.3209 | 8.868 | 3.6109 | 7.0537 | 8.299 |
| 3.1253 | 6.0 | 6060 | 2.3127 | 8.9488 | 3.6615 | 7.1044 | 8.3785 |
| 3.1253 | 7.0 | 7070 | 2.3056 | 8.9366 | 3.6507 | 7.1338 | 8.3615 |
| 3.1253 | 8.0 | 8080 | 2.3040 | 8.9565 | 3.6563 | 7.1346 | 8.3802 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-english", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-english
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-english
============================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3040
* Rouge1: 8.9565
* Rouge2: 3.6563
* Rougel: 7.1346
* Rougelsum: 8.3802
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-french
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0917
- Rouge1: 12.0984
- Rouge2: 5.7289
- Rougel: 9.9245
- Rougelsum: 11.0697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|
| No log | 1.0 | 549 | 2.3203 | 11.5172 | 4.9352 | 9.3617 | 10.4605 |
| No log | 2.0 | 1098 | 2.2057 | 11.8469 | 5.2369 | 9.6452 | 10.8337 |
| No log | 3.0 | 1647 | 2.1525 | 11.9096 | 5.4027 | 9.7648 | 10.9315 |
| 3.1825 | 4.0 | 2196 | 2.1307 | 12.0782 | 5.5848 | 9.9614 | 11.1081 |
| 3.1825 | 5.0 | 2745 | 2.1172 | 11.9821 | 5.6042 | 9.8216 | 11.0077 |
| 3.1825 | 6.0 | 3294 | 2.1012 | 12.0845 | 5.6834 | 9.9119 | 11.0741 |
| 3.1825 | 7.0 | 3843 | 2.0964 | 12.1296 | 5.7271 | 9.9495 | 11.1227 |
| 2.3376 | 8.0 | 4392 | 2.0917 | 12.0984 | 5.7289 | 9.9245 | 11.0697 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-french", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-french
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-french
===========================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0917
* Rouge1: 12.0984
* Rouge2: 5.7289
* Rougel: 9.9245
* Rougelsum: 11.0697
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-german
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5135
- Rouge1: 8.0553
- Rouge2: 2.7846
- Rougel: 6.2182
- Rougelsum: 7.6203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 723 | 2.7112 | 7.3681 | 2.3679 | 5.5705 | 6.7588 |
| No log | 2.0 | 1446 | 2.6178 | 7.8539 | 2.7551 | 6.2081 | 7.4139 |
| No log | 3.0 | 2169 | 2.5756 | 7.8401 | 2.6075 | 6.0135 | 7.4303 |
| No log | 4.0 | 2892 | 2.5465 | 8.1097 | 2.8525 | 6.268 | 7.6482 |
| 3.4589 | 5.0 | 3615 | 2.5315 | 8.0192 | 2.7848 | 6.2484 | 7.5859 |
| 3.4589 | 6.0 | 4338 | 2.5222 | 8.1063 | 2.8986 | 6.337 | 7.6564 |
| 3.4589 | 7.0 | 5061 | 2.5136 | 8.0565 | 2.8707 | 6.2732 | 7.6105 |
| 3.4589 | 8.0 | 5784 | 2.5135 | 8.0553 | 2.7846 | 6.2182 | 7.6203 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-german", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-german
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-german
===========================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5135
* Rouge1: 8.0553
* Rouge2: 2.7846
* Rougel: 6.2182
* Rougelsum: 7.6203
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-italian
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.5739
- Rouge1: 2.1728
- Rouge2: 0.1516
- Rougel: 2.0846
- Rougelsum: 2.0515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 8 | 16.6193 | 2.4011 | 0.3829 | 2.1505 | 2.2161 |
| No log | 2.0 | 16 | 15.8909 | 2.5165 | 0.2799 | 2.3403 | 2.3523 |
| No log | 3.0 | 24 | 15.4843 | 2.2794 | 0.2252 | 2.1849 | 2.1382 |
| 17.2559 | 4.0 | 32 | 13.0850 | 2.2448 | 0.1516 | 2.1426 | 2.0859 |
| 17.2559 | 5.0 | 40 | 11.7838 | 2.2448 | 0.1516 | 2.1426 | 2.0859 |
| 17.2559 | 6.0 | 48 | 11.3207 | 2.2424 | 0.1516 | 2.1423 | 2.1171 |
| 17.2559 | 7.0 | 56 | 10.7871 | 2.1081 | 0.1516 | 2.0227 | 1.9838 |
| 14.6026 | 8.0 | 64 | 10.5739 | 2.1728 | 0.1516 | 2.0846 | 2.0515 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-italian", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-italian
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-italian
============================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 10.5739
* Rouge1: 2.1728
* Rouge2: 0.1516
* Rougel: 2.0846
* Rougelsum: 2.0515
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-polish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3179
- Rouge1: 7.911
- Rouge2: 3.2189
- Rougel: 6.7856
- Rougelsum: 7.4485
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 315 | 2.5391 | 5.9874 | 2.3594 | 5.1303 | 5.6116 |
| No log | 2.0 | 630 | 2.4446 | 7.7294 | 3.0152 | 6.6024 | 7.2757 |
| No log | 3.0 | 945 | 2.3912 | 7.6451 | 2.9785 | 6.5714 | 7.2011 |
| 3.5311 | 4.0 | 1260 | 2.3720 | 7.8007 | 3.0913 | 6.7067 | 7.3451 |
| 3.5311 | 5.0 | 1575 | 2.3411 | 7.8374 | 3.1208 | 6.7288 | 7.3459 |
| 3.5311 | 6.0 | 1890 | 2.3354 | 7.8664 | 3.1655 | 6.762 | 7.4364 |
| 3.5311 | 7.0 | 2205 | 2.3175 | 7.9529 | 3.2225 | 6.8438 | 7.4904 |
| 2.692 | 8.0 | 2520 | 2.3179 | 7.911 | 3.2189 | 6.7856 | 7.4485 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-polish", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-polish
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-polish
===========================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3179
* Rouge1: 7.911
* Rouge2: 3.2189
* Rougel: 6.7856
* Rougelsum: 7.4485
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-portuguese
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0428
- Rouge1: 9.4966
- Rouge2: 4.2224
- Rougel: 7.9845
- Rougelsum: 8.8641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 334 | 2.2258 | 7.3686 | 2.9066 | 6.3167 | 6.8758 |
| No log | 2.0 | 668 | 2.1389 | 9.0551 | 3.8395 | 7.6578 | 8.4641 |
| No log | 3.0 | 1002 | 2.1030 | 9.2792 | 3.9352 | 7.8259 | 8.663 |
| No log | 4.0 | 1336 | 2.0841 | 9.337 | 4.0647 | 7.8662 | 8.693 |
| 3.2831 | 5.0 | 1670 | 2.0487 | 9.4244 | 4.0821 | 7.8633 | 8.7111 |
| 3.2831 | 6.0 | 2004 | 2.0580 | 9.4598 | 4.1598 | 7.9511 | 8.8299 |
| 3.2831 | 7.0 | 2338 | 2.0426 | 9.501 | 4.1885 | 7.9803 | 8.8612 |
| 3.2831 | 8.0 | 2672 | 2.0428 | 9.4966 | 4.2224 | 7.9845 | 8.8641 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-portuguese", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-portuguese
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-portuguese
===============================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.0428
* Rouge1: 9.4966
* Rouge2: 4.2224
* Rougel: 7.9845
* Rougelsum: 8.8641
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-base-wikinewssum-spanish
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2394
- Rouge1: 7.9732
- Rouge2: 3.5041
- Rougel: 6.6713
- Rougelsum: 7.5229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 528 | 2.3707 | 6.687 | 2.9169 | 5.6793 | 6.2978 |
| No log | 2.0 | 1056 | 2.3140 | 7.9518 | 3.4529 | 6.7265 | 7.4984 |
| No log | 3.0 | 1584 | 2.2848 | 7.9708 | 3.5344 | 6.7272 | 7.534 |
| No log | 4.0 | 2112 | 2.2668 | 8.0252 | 3.5323 | 6.7319 | 7.5819 |
| 3.2944 | 5.0 | 2640 | 2.2532 | 8.0143 | 3.534 | 6.7155 | 7.582 |
| 3.2944 | 6.0 | 3168 | 2.2399 | 7.9525 | 3.4849 | 6.6716 | 7.5155 |
| 3.2944 | 7.0 | 3696 | 2.2376 | 7.9405 | 3.4661 | 6.6559 | 7.5043 |
| 3.2944 | 8.0 | 4224 | 2.2394 | 7.9732 | 3.5041 | 6.6713 | 7.5229 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-base-wikinewssum-spanish", "results": []}]}
|
airKlizz/mt5-base-wikinewssum-spanish
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-base-wikinewssum-spanish
============================
This model is a fine-tuned version of google/mt5-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.2394
* Rouge1: 7.9732
* Rouge2: 3.5041
* Rougel: 6.6713
* Rougelsum: 7.5229
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
summarization
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-wikinewssum-test
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9354
- Rouge1: 6.8433
- Rouge2: 2.5498
- Rougel: 5.6114
- Rougelsum: 6.353
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 661 | 3.2810 | 6.4161 | 2.403 | 5.3674 | 6.0329 |
| No log | 2.0 | 1322 | 3.1515 | 6.9291 | 2.6826 | 5.6839 | 6.4359 |
| No log | 3.0 | 1983 | 3.0565 | 6.7939 | 2.6113 | 5.6133 | 6.3126 |
| No log | 4.0 | 2644 | 2.9815 | 6.0279 | 2.1637 | 4.9892 | 5.5962 |
| No log | 5.0 | 3305 | 2.9645 | 6.3926 | 2.339 | 5.2716 | 5.9443 |
| 3.9937 | 6.0 | 3966 | 2.9476 | 6.4739 | 2.3615 | 5.3473 | 6.0089 |
| 3.9937 | 7.0 | 4627 | 2.9405 | 6.615 | 2.4309 | 5.4493 | 6.1445 |
| 3.9937 | 8.0 | 5288 | 2.9354 | 6.8433 | 2.5498 | 5.6114 | 6.353 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["summarization", "generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "mt5-small-wikinewssum-test", "results": []}]}
|
airKlizz/mt5-small-wikinewssum-test
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mt5-small-wikinewssum-test
==========================
This model is a fine-tuned version of google/mt5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9354
* Rouge1: 6.8433
* Rouge2: 2.5498
* Rougel: 5.6114
* Rougelsum: 6.353
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5.6e-05
* train\_batch\_size: 12
* eval\_batch\_size: 12
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 8
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.1
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #summarization #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5.6e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 8",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.1\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-multilingual-cased
Finetuning `bert-base-multilingual-cased` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=bert-base-multilingual-cased
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
```
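For inference, a hedged sketch (not part of the original card): the finetuned checkpoint can be used with the standard question-answering pipeline. The question/context pair below mirrors the widget defined in this card's metadata, with the context truncated.

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="airesearch/bert-base-multilingual-cased-finetune-qa",
    tokenizer="airesearch/bert-base-multilingual-cased-finetune-qa",
)

result = qa(
    question="สวนกุหลาบเป็นโรงเรียนอะไร",
    context="โรงเรียนสวนกุหลาบวิทยาลัย (Suankularb Wittayalai School) เป็นโรงเรียนชายล้วน ...",
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```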
|
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
airesearch/bert-base-multilingual-cased-finetune-qa
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us
|
# bert-base-multilingual-cased
Finetuning 'bert-base-multilingual-cased' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.
Trained with thai2transformers.
Run with:
|
[
"# bert-base-multilingual-cased\n\nFinetuning 'bert-base-multilingual-cased' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #endpoints_compatible #region-us \n",
"# bert-base-multilingual-cased\n\nFinetuning 'bert-base-multilingual-cased' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:"
] |
fill-mask
|
transformers
|
# Finetuned `bert-base-multilingual-cased` model on Thai sequence and token classification datasets
<br>
Finetuned multilingual BERT (mBERT) base model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual BERT model (mBERT) as proposed by [[Devlin et al., 2018]](https://arxiv.org/abs/1810.04805). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/bert-base-multilingual-cased).
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale is ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The example notebook demonstrating how to use finetuned model for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
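As a hedged illustration only (the card itself defers to the Colab notebook above), a downstream classification checkpoint of this family would typically be used through the text-classification pipeline. The checkpoint name below is a placeholder, since this card does not list per-task checkpoint ids.

```python
from transformers import pipeline

# "<finetuned-checkpoint>" is a placeholder for a task-specific checkpoint
# (e.g. one finetuned on wisesight_sentiment); see the Colab notebook above.
classifier = pipeline(
    "text-classification",
    model="<finetuned-checkpoint>",
    tokenizer="<finetuned-checkpoint>",
)
print(classifier("อาหารอร่อยมาก บริการดีเยี่ยม"))  # expected: a sentiment-style label with a score
```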
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{}
|
airesearch/bert-base-multilingual-cased-finetuned
| null |
[
"transformers",
"bert",
"fill-mask",
"arxiv:1810.04805",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805",
"2101.09635"
] |
[] |
TAGS
#transformers #bert #fill-mask #arxiv-1810.04805 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us
|
# Finetuend 'bert-base-multilignual-cased' model on Thai sequence and token classification datasets
<br>
Finetuned XLM Roberta BASE model on Thai sequence and token classification datasets
The script and documentation can be found at this repository.
<br>
## Model description
<br>
We use the pretrained cross-lingual BERT model (mBERT) as proposed by [[Devlin et al., 2018]](URL We download the pretrained PyTorch model via HuggingFace's Model Hub (URL
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as descibed in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.
<br>
## How to use
<br>
The example notebook demonstrating how to use finetuned model for inference can be found at this Colab notebook
<br>
BibTeX entry and citation info
|
[
"# Finetuend 'bert-base-multilignual-cased' model on Thai sequence and token classification datasets\n\n<br>\n\nFinetuned XLM Roberta BASE model on Thai sequence and token classification datasets\nThe script and documentation can be found at this repository.\n\n<br>",
"## Model description\n\n<br>\n\nWe use the pretrained cross-lingual BERT model (mBERT) as proposed by [[Devlin et al., 2018]](URL We download the pretrained PyTorch model via HuggingFace's Model Hub (URL\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe example notebook demonstrating how to use finetuned model for inference can be found at this Colab notebook\n\n<br>\n\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #bert #fill-mask #arxiv-1810.04805 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Finetuend 'bert-base-multilignual-cased' model on Thai sequence and token classification datasets\n\n<br>\n\nFinetuned XLM Roberta BASE model on Thai sequence and token classification datasets\nThe script and documentation can be found at this repository.\n\n<br>",
"## Model description\n\n<br>\n\nWe use the pretrained cross-lingual BERT model (mBERT) as proposed by [[Devlin et al., 2018]](URL We download the pretrained PyTorch model via HuggingFace's Model Hub (URL\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe example notebook demonstrating how to use finetuned model for inference can be found at this Colab notebook\n\n<br>\n\n\nBibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# WangchanBERTa base model: `wangchanberta-base-att-spm-uncased`
<br>
Pretrained RoBERTa BASE model on assorted Thai texts (78.5 GB).
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale is ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
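A minimal, hedged sketch (usage assumed from the standard `transformers` API rather than taken from this card): the pretrained checkpoint can be queried for masked language modeling. The example sentence is the widget text from this card's metadata.

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="airesearch/wangchanberta-base-att-spm-uncased",
    tokenizer="airesearch/wangchanberta-base-att-spm-uncased",
)

# Predict the masked span in the widget sentence from the card metadata.
print(fill_mask("ผู้ใช้งานท่าอากาศยานนานาชาติ<mask>มีกว่าสามล้านคน<pad>"))
```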
## Training data
`wangchanberta-base-att-spm-uncased` model was pretrained on assorted Thai text dataset. The total size of uncompressed text is 78.5GB.
### Preprocessing
Texts are preprocessed with the following rules (a rough Python sketch of these rules follows the list):
- Replace HTML forms of characters with the actual characters, such as `nbsp;` with a space and `<br />` with a line break [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Remove empty brackets ((), {}, and []) that sometimes come up as a result of text extraction, such as from Wikipedia.
- Replace line breaks with spaces.
- Replace more than one space with a single space.
- Remove more than 3 repetitive characters, e.g. ดีมากกก becomes ดีมาก [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146).
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU)'s `newmm` dictionary-based maximal matching tokenizer.
- Replace repetitive words; this is done post-tokenization, unlike [[Howard and Ruder, 2018]](https://arxiv.org/abs/1801.06146), since there is no delimitation by space in Thai as in English.
- Replace spaces with <_>. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai, such as sentence boundaries similar to periods in English, combining them with other tokens would omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <_>.
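A rough re-implementation sketch of a few of the rules above (assumed, not the authors' actual preprocessing script; the regexes are illustrative approximations):

```python
import re

def preprocess(text: str) -> str:
    text = text.replace("\n", " ")                    # line breaks -> spaces
    text = re.sub(r"[\(\[\{]\s*[\)\]\}]", "", text)   # drop empty brackets
    text = re.sub(r"(.)\1{2,}", r"\1", text)          # collapse character repeats (ดีมากกก -> ดีมาก)
    text = re.sub(r" {2,}", " ", text)                # multiple spaces -> a single space
    return text.strip().replace(" ", "<_>")           # mark spaces explicitly with <_>

print(preprocess("ดีมากกกก   จริง ๆ ()"))  # -> "ดีมาก<_>จริง<_>ๆ"
```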
<br>
Regarding the vocabulary, we use SentencePiece [[Kudo, 2018]](https://arxiv.org/abs/1808.06226) to train a SentencePiece unigram model.
The tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set.
The length of each sequence is limited up to 416 subword tokens.
Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with the `<mask>` token. Out of the 15%, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
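The 80/10/10 split above is the standard BERT/RoBERTa-style masking; the following schematic (not the authors' code; `mask_id` and `vocab_size` are illustrative parameters) shows the procedure on a list of token ids:

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the MLM loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:           # sample ~15% of the positions
            labels[i] = tok                      # the model must predict the original token here
            r = random.random()
            if r < 0.8:                          # 80%: replace with the <mask> token
                inputs[i] = mask_id
            elif r < 0.9:                        # 10%: replace with a random token
                inputs[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return inputs, labels
```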
<br>
**Train/Val/Test splits**
After preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence length of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](https://zenodo.org/record/4319685#.YA4xEGQzaDU), 8,680,485,067 subwords as tokenized by SentencePiece tokenizer, and 53,035,823,287 characters.
<br>
**Pretraining**
The model was trained on 8 V100 GPUs for 500,000 steps with a batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with a learning rate of $3e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.999$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with the minimum validation loss is selected as the best model checkpoint.
As of Sun 24 Jan 2021, we release the model from the checkpoint at 360,000 steps because the pretraining has not yet been completed.
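A sketch of the optimizer and warmup-then-linear-decay schedule from the pretraining paragraph above (this assumes the `transformers` scheduler helper; the authors' actual pretraining code is not included in this card):

```python
import torch
from transformers import get_linear_schedule_with_warmup

model_parameters = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for the model's parameters
optimizer = torch.optim.Adam(model_parameters, lr=3e-4, betas=(0.9, 0.999), eps=1e-6)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=24_000,      # warm up over the first 24,000 steps
    num_training_steps=500_000,   # then decay linearly to zero
)
```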
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "th", "widget": [{"text": "\u0e1c\u0e39\u0e49\u0e43\u0e0a\u0e49\u0e07\u0e32\u0e19\u0e17\u0e48\u0e32\u0e2d\u0e32\u0e01\u0e32\u0e28\u0e22\u0e32\u0e19\u0e19\u0e32\u0e19\u0e32\u0e0a\u0e32\u0e15\u0e34<mask>\u0e21\u0e35\u0e01\u0e27\u0e48\u0e32\u0e2a\u0e32\u0e21\u0e25\u0e49\u0e32\u0e19\u0e04\u0e19<pad>"}]}
|
airesearch/wangchanberta-base-att-spm-uncased
| null |
[
"transformers",
"pytorch",
"safetensors",
"camembert",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:1801.06146",
"arxiv:1808.06226",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"1801.06146",
"1808.06226",
"2101.09635"
] |
[
"th"
] |
TAGS
#transformers #pytorch #safetensors #camembert #fill-mask #th #arxiv-1907.11692 #arxiv-1801.06146 #arxiv-1808.06226 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# WangchanBERTa base model: 'wangchanberta-base-att-spm-uncased'
<br>
Pretrained RoBERTa BASE model on assorted Thai texts (78.5 GB).
The script and documentation can be found at this repository.
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as described in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this page.
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this Colab notebook
<br>
## Training data
'wangchanberta-base-att-spm-uncased' model was pretrained on assorted Thai text dataset. The total size of uncompressed text is 78.5GB.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace HTML forms of characters with the actual characters such asnbsp;with a space and \\\\\\\\\\\\\\\\<br /> with a line break [[Howard and Ruder, 2018]](URL
- Remove empty brackets ((), {}, and []) than sometimes come up as a result of text extraction such as from Wikipedia.
- Replace line breaks with spaces.
- Replace more than one spaces with a single space
- Remove more than 3 repetitive characters such as ดีมากกก to ดีมาก [Howard and Ruder, 2018]](URL
- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](URL ’s 'newmm' dictionary-based maximal matching tokenizer.
- Replace repetitive words; this is done post-tokenization unlike [[Howard and Ruder, 2018]](URL since there is no delimitation by space in Thai as in English.
- Replace spaces with <\\\\\\\\\\\\\\\\_>. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai such as sentence boundaries similar to periods in English, combining it with other tokens will omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <\\\\\\\\\\\\\\\\_>.
<br>
Regarding the vocabulary, we use SentencePiece [[Kudo, 2018]](URL to train SentencePiece unigram model.
The tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set.
The length of each sequence is limited up to 416 subword tokens.
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.
<br>
Train/Val/Test splits
After preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence length of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](URL 8,680,485,067 subwords as tokenized by SentencePiece tokenizer, and 53,035,823,287 characters.
<br>
Pretraining
The model was trained on 8 V100 GPUs for 500,000 steps with the batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with the learning rate of $3e-4$, $\\\\\\\\\\\\\\\\beta_1 = 0.9$, $\\\\\\\\\\\\\\\\beta_2= 0.999$ and $\\\\\\\\\\\\\\\\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
As of Sun 24 Jan 2021, we release the model from the checkpoint @360,000 steps due to the model pretraining has not yet been completed
<br>
BibTeX entry and citation info
|
[
"# WangchanBERTa base model: 'wangchanberta-base-att-spm-uncased'\n\n<br>\n\nPretrained RoBERTa BASE model on assorted Thai texts (78.5 GB).\nThe script and documentation can be found at this repository.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as described in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-att-spm-uncased' model was pretrained on assorted Thai text dataset. The total size of uncompressed text is 78.5GB.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace HTML forms of characters with the actual characters such asnbsp;with a space and \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\<br /> with a line break [[Howard and Ruder, 2018]](URL\n- Remove empty brackets ((), {}, and []) than sometimes come up as a result of text extraction such as from Wikipedia.\n- Replace line breaks with spaces.\n- Replace more than one spaces with a single space\n- Remove more than 3 repetitive characters such as ดีมากกก to ดีมาก [Howard and Ruder, 2018]](URL\n- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](URL ’s 'newmm' dictionary-based maximal matching tokenizer.\n- Replace repetitive words; this is done post-tokenization unlike [[Howard and Ruder, 2018]](URL since there is no delimitation by space in Thai as in English.\n- Replace spaces with <\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\_>. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai such as sentence boundaries similar to periods in English, combining it with other tokens will omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\_>.\n\n<br>\n\n\nRegarding the vocabulary, we use SentencePiece [[Kudo, 2018]](URL to train SentencePiece unigram model.\nThe tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set.\n\n\nThe length of each sequence is limited up to 416 subword tokens.\n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nAfter preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence length of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](URL 8,680,485,067 subwords as tokenized by SentencePiece tokenizer, and 53,035,823,287 characters.\n<br>\n\nPretraining\n\nThe model was trained on 8 V100 GPUs for 500,000 steps with the batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with the learning rate of $3e-4$, $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\beta_1 = 0.9$, $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\beta_2= 0.999$ and $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\nAs of Sun 24 Jan 2021, we release the model from the checkpoint @360,000 steps due to the model pretraining has not yet been completed\n\n<br>\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #safetensors #camembert #fill-mask #th #arxiv-1907.11692 #arxiv-1801.06146 #arxiv-1808.06226 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# WangchanBERTa base model: 'wangchanberta-base-att-spm-uncased'\n\n<br>\n\nPretrained RoBERTa BASE model on assorted Thai texts (78.5 GB).\nThe script and documentation can be found at this repository.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as described in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-att-spm-uncased' model was pretrained on assorted Thai text dataset. The total size of uncompressed text is 78.5GB.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace HTML forms of characters with the actual characters such asnbsp;with a space and \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\<br /> with a line break [[Howard and Ruder, 2018]](URL\n- Remove empty brackets ((), {}, and []) than sometimes come up as a result of text extraction such as from Wikipedia.\n- Replace line breaks with spaces.\n- Replace more than one spaces with a single space\n- Remove more than 3 repetitive characters such as ดีมากกก to ดีมาก [Howard and Ruder, 2018]](URL\n- Word-level tokenization using [[Phatthiyaphaibun et al., 2020]](URL ’s 'newmm' dictionary-based maximal matching tokenizer.\n- Replace repetitive words; this is done post-tokenization unlike [[Howard and Ruder, 2018]](URL since there is no delimitation by space in Thai as in English.\n- Replace spaces with <\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\_>. The SentencePiece tokenizer combines the spaces with other tokens. Since spaces serve as punctuation in Thai such as sentence boundaries similar to periods in English, combining it with other tokens will omit an important feature for tasks such as word tokenization and sentence breaking. Therefore, we opt to explicitly mark spaces with <\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\_>.\n\n<br>\n\n\nRegarding the vocabulary, we use SentencePiece [[Kudo, 2018]](URL to train SentencePiece unigram model.\nThe tokenizer has a vocabulary size of 25,000 subwords, trained on 15M sentences sampled from the training set.\n\n\nThe length of each sequence is limited up to 416 subword tokens.\n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nAfter preprocessing and deduplication, we have a training set of 381,034,638 unique, mostly Thai sentences with sequence length of 5 to 300 words (78.5GB). The training set has a total of 16,957,775,412 words as tokenized by dictionary-based maximal matching [[Phatthiyaphaibun et al., 2020]](URL 8,680,485,067 subwords as tokenized by SentencePiece tokenizer, and 53,035,823,287 characters.\n<br>\n\nPretraining\n\nThe model was trained on 8 V100 GPUs for 500,000 steps with the batch size of 4,096 (32 sequences per device with 16 accumulation steps) and a sequence length of 416 tokens. The optimizer we used is Adam with the learning rate of $3e-4$, $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\beta_1 = 0.9$, $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\beta_2= 0.999$ and $\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\epsilon = 1e-6$. The learning rate is warmed up for the first 24,000 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\nAs of Sun 24 Jan 2021, we release the model from the checkpoint @360,000 steps due to the model pretraining has not yet been completed\n\n<br>\n\nBibTeX entry and citation info"
] |
question-answering
|
transformers
|
# wangchanberta-base-wiki-20210520-spm-finetune-qa
Finetuning `airesearchth/wangchanberta-base-wiki-20210520-spmd` with the training set of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 `newmm` words). Benchmarks shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Run with:
```
export MODEL_NAME=airesearchth/wangchanberta-base-wiki-20210520-news-spm
CUDA_LAUNCH_BLOCKING=1 python train_question_answering_lm_finetuning.py \
    --model_name $MODEL_NAME \
    --dataset_name chimera_qa \
    --output_dir $MODEL_NAME-finetune-chimera_qa-model \
    --log_dir $MODEL_NAME-finetune-chimera_qa-log \
    --model_max_length 400 \
    --pad_on_right \
    --fp16
```
|
{"language": "th", "widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
airesearch/wangchanberta-base-wiki-20210520-spm-finetune-qa
| null |
[
"transformers",
"pytorch",
"camembert",
"question-answering",
"th",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"th"
] |
TAGS
#transformers #pytorch #camembert #question-answering #th #endpoints_compatible #region-us
|
# wangchanberta-base-wiki-20210520-spm-finetune-qa
Finetuning 'airesearchth/wangchanberta-base-wiki-20210520-spmd' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.
Trained with thai2transformers.
Run with:
|
[
"# wangchanberta-base-wiki-20210520-spm-finetune-qa\n\nFinetuning 'airesearchth/wangchanberta-base-wiki-20210520-spmd' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:"
] |
[
"TAGS\n#transformers #pytorch #camembert #question-answering #th #endpoints_compatible #region-us \n",
"# wangchanberta-base-wiki-20210520-spm-finetune-qa\n\nFinetuning 'airesearchth/wangchanberta-base-wiki-20210520-spmd' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nRun with:"
] |
fill-mask
|
transformers
|
# WangchanBERTa base model: `wangchanberta-base-wiki-newmm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
Users' review rating classification task (scale is ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
The `wangchanberta-base-wiki-newmm` model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles of 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). We leave out lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occur right after the title of the first paragraph.
- Replace spaces with <_>.
<br>
Regarding the vocabulary, we use word-level tokens from [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s dictionary-based tokenizer, namely `newmm`. The total number of word-level tokens in the vocabulary is 97,982.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them with an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens and replace them with the `<mask>` token. Out of the 15%, 80% are replaced with a `<mask>` token, 10% are left unchanged, and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We sequentially split the data into a training set of 944,782 sentences, a validation set of 24,863 sentences, and a test set of 24,862 sentences.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "th"}
|
airesearch/wangchanberta-base-wiki-newmm
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"2101.09635"
] |
[
"th"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us
|
# WangchanBERTa base model: 'wangchanberta-base-wiki-newmm'
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at this reposiryory.
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as descibed in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this Colab notebook
<br>
## Training data
'wangchanberta-base-wiki-newmm' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occur right after the title of the first paragraph.
- Replace spaces wtth <_>.
<br>
Regarding the vocabulary, we use wordl-level token from PyThaiNLP's dictionary-based tokenizer namedly 'newmm'. The total number of word-level tokens in the vocabulary is 97,982.
We sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.
<br>
Train/Val/Test splits
We split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.
<br>
Pretraining
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
BibTeX entry and citation info
|
[
"# WangchanBERTa base model: 'wangchanberta-base-wiki-newmm'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-newmm' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use wordl-level token from PyThaiNLP's dictionary-based tokenizer namedly 'newmm'. The total number of word-level tokens in the vocabulary is 97,982. \n\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us \n",
"# WangchanBERTa base model: 'wangchanberta-base-wiki-newmm'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-newmm' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use wordl-level token from PyThaiNLP's dictionary-based tokenizer namedly 'newmm'. The total number of word-level tokens in the vocabulary is 97,982. \n\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# WangchanBERTa base model: `wangchanberta-base-wiki-sefr`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
  Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
  Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
  Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
  Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
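As a quick illustration (not part of the original documentation), the checkpoint can also be queried with the standard `fill-mask` pipeline; depending on the tokenizer files shipped with this checkpoint, you may instead need the custom tokenizer classes from the thai2transformers repository. The Thai example sentence below is arbitrary.
```python
# Minimal sketch: masked-token prediction with the pretrained checkpoint.
# The example sentence is arbitrary; inputs should follow the preprocessing
# rules described under "Training data" (e.g. spaces replaced with "<_>").
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="airesearch/wangchanberta-base-wiki-sefr",
    tokenizer="airesearch/wangchanberta-base-wiki-sefr",
)

print(fill_mask("ผมชอบอ่าน<mask>ทุกวัน"))
```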
<br>
## Training data
`wangchanberta-base-wiki-sefr` model was pretrained on Thai Wikipedia. Specifically, we use articles from the Wikipedia dump of 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/), excluding lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with `<_>`.
<br>
Regarding the vocabulary, we use the Stacked Ensemble Filter and Refine (SEFR) tokenizer (`engine="best"`) [[Limkonchotiwat et al., 2020]](https://www.aclweb.org/anthology/2020.emnlp-main.315/), which is based on probabilities from the CNN-based `deepcut` tokenizer [[Kittinaradorn et al., 2019]](http://doi.org/10.5281/zenodo.3457707). The total number of word-level tokens in the vocabulary is 92,177.
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them with an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for prediction. Out of these, 80% are replaced with a `<mask>` token, 10% are left unchanged and 10% are replaced with a random token.
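The scheme above is the standard RoBERTa-style dynamic masking. The sketch below is an assumed, generic implementation of that 15% / 80-10-10 split, not the project's actual training code:
```python
# Illustrative sketch of the 15% / 80-10-10 masking scheme described above.
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    labels = input_ids.clone()

    # Choose 15% of positions as prediction targets.
    masked_indices = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the chosen positions become <mask>.
    replaced = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = mask_token_id

    # Half of the remaining 20% (i.e. 10% overall) become a random token.
    randomized = (
        torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & masked_indices & ~replaced
    )
    random_tokens = torch.randint(vocab_size, input_ids.shape, dtype=torch.long)
    input_ids[randomized] = random_tokens[randomized]

    # The final 10% are left unchanged.
    return input_ids, labels
```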
<br>
**Train/Val/Test splits**
We split the data sequentially into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with a batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer is Adam with a learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1,250 steps and then linearly decayed to zero. The checkpoint with the minimum validation loss is selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "th"}
|
airesearch/wangchanberta-base-wiki-sefr
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"2101.09635"
] |
[
"th"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us
|
# WangchanBERTa base model: 'wangchanberta-base-wiki-sefr'
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at this reposiryory.
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as descibed in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this Colab notebook
<br>
## Training data
'wangchanberta-base-wiki-sefr' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occur right after the title of the first paragraph.
- Replace spaces wtth <_>.
<br>
Regarding the vocabulary, we use Stacked Ensemble Filter and Refine (SEFR) tokenizer '(engine="best") '[[Limkonchotiwat et al., 2020]](URL based on probablities from CNN-based 'deepcut' [[Kittinaradorn et al., 2019]](URL The total number of word-level tokens in the vocabulary is 92,177.
We sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.
<br>
Train/Val/Test splits
We split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.
<br>
Pretraining
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
BibTeX entry and citation info
|
[
"# WangchanBERTa base model: 'wangchanberta-base-wiki-sefr'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-sefr' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use Stacked Ensemble Filter and Refine (SEFR) tokenizer '(engine=\"best\") '[[Limkonchotiwat et al., 2020]](URL based on probablities from CNN-based 'deepcut' [[Kittinaradorn et al., 2019]](URL The total number of word-level tokens in the vocabulary is 92,177.\n\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us \n",
"# WangchanBERTa base model: 'wangchanberta-base-wiki-sefr'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-sefr' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use Stacked Ensemble Filter and Refine (SEFR) tokenizer '(engine=\"best\") '[[Limkonchotiwat et al., 2020]](URL based on probablities from CNN-based 'deepcut' [[Kittinaradorn et al., 2019]](URL The total number of word-level tokens in the vocabulary is 92,177.\n\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# WangchanBERTa base model: `wangchanberta-base-wiki-spm`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
  Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
  Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
  Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
  Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-spm` model was pretrained on Thai Wikipedia. Specifically, we use articles from the Wikipedia dump of 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/), excluding lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with `<_>`.
<br>
Regarding the vocabulary, we use subword tokens trained with the [SentencePiece](https://github.com/google/sentencepiece) library on the training set of the Thai Wikipedia corpus. The total number of subword tokens is 24,000.
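As an illustration only, a unigram SentencePiece model of this size could be trained roughly as follows; the file names are hypothetical and this is not the project's actual training command.
```python
# Hypothetical sketch: training a 24k-piece SentencePiece model on a plain-text
# dump of the Thai Wikipedia training split (file names are made up).
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="thwiki_train.txt",    # hypothetical path to the training text
    model_prefix="thwiki_spm",   # produces thwiki_spm.model / thwiki_spm.vocab
    vocab_size=24000,
)
```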
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them with an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for prediction. Out of these, 80% are replaced with a `<mask>` token, 10% are left unchanged and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with a batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer is Adam with a learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1,250 steps and then linearly decayed to zero. The checkpoint with the minimum validation loss is selected as the best model checkpoint.
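Assuming a straightforward mapping to PyTorch and `transformers` utilities (this is not the authors' training script), the optimizer and learning-rate schedule described above could be set up like this:
```python
# Sketch of the optimizer and schedule described above (assumed mapping to
# standard PyTorch/transformers utilities).
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer(model, total_steps=31_250, warmup_steps=1_250):
    optimizer = torch.optim.Adam(
        model.parameters(), lr=7e-4, betas=(0.9, 0.98), eps=1e-6
    )
    # Linear warmup for the first 1,250 steps, then linear decay to zero.
    scheduler = get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=warmup_steps, num_training_steps=total_steps
    )
    return optimizer, scheduler
```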
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "th"}
|
airesearch/wangchanberta-base-wiki-spm
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"2101.09635"
] |
[
"th"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us
|
# WangchanBERTa base model: 'wangchanberta-base-wiki-spm'
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at this reposiryory.
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as descibed in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this Colab notebook
<br>
## Training data
'wangchanberta-base-wiki-spm' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occur right after the title of the first paragraph.
- Replace spaces wtth <_>.
<br>
Regarding the vocabulary, we use subword token trained with SentencePice library on the training set of Thai Wikipedia corpus. The total number of subword tokens is 24,000.
We sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.
<br>
Train/Val/Test splits
We split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.
<br>
Pretraining
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
BibTeX entry and citation info
|
[
"# WangchanBERTa base model: 'wangchanberta-base-wiki-spm'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-spm' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use subword token trained with SentencePice library on the training set of Thai Wikipedia corpus. The total number of subword tokens is 24,000. \n\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us \n",
"# WangchanBERTa base model: 'wangchanberta-base-wiki-spm'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-spm' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use subword token trained with SentencePice library on the training set of Thai Wikipedia corpus. The total number of subword tokens is 24,000. \n\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# WangchanBERTa base model: `wangchanberta-base-wiki-syllable`
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reviews`
  Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
  Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
  Named-entity recognition tagging with 13 named-entities as described in this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
  Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as described in this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
<br>
## Training data
`wangchanberta-base-wiki-syllable` model was pretrained on Thai Wikipedia. Specifically, we use articles from the Wikipedia dump of 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/), excluding lists and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occurs right after the title of the first paragraph.
- Replace spaces with `<_>`.
<br>
Regarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as `syllable` from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of tokens in the vocabulary is 59,235.
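For illustration only (not taken from the original card), syllable-level tokenization with PyThaiNLP looks roughly like this; the exact engine and version used for pretraining are as described above.
```python
# Rough illustration of syllable-level tokenization with PyThaiNLP.
from pythainlp.tokenize import syllable_tokenize

text = "ภาษาไทยไม่มีการเว้นวรรคระหว่างคำ"
print(syllable_tokenize(text))
# -> a list of Thai syllables (exact output depends on the engine and version)
```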
We sample sentences contiguously so that each sequence has a length of at most 512 tokens. For sentences that overlap the 512-token boundary, we split them with an additional token as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence we sample 15% of the tokens for prediction. Out of these, 80% are replaced with a `<mask>` token, 10% are left unchanged and 10% are replaced with a random token.
<br>
**Train/Val/Test splits**
We split the data sequentially into 944,782 sentences for the training set, 24,863 sentences for the validation set and 24,862 sentences for the test set.
<br>
**Pretraining**
The model was trained on 32 V100 GPUs for 31,250 steps with a batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer is Adam with a learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2 = 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1,250 steps and then linearly decayed to zero. The checkpoint with the minimum validation loss is selected as the best model checkpoint.
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "th"}
|
airesearch/wangchanberta-base-wiki-syllable
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"th",
"arxiv:1907.11692",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"2101.09635"
] |
[
"th"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us
|
# WangchanBERTa base model: 'wangchanberta-base-wiki-syllable'
<br>
Pretrained RoBERTa BASE model on Thai Wikipedia corpus.
The script and documentation can be found at this reposiryory.
<br>
## Model description
<br>
The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL
<br>
## Intended uses & limitations
<br>
You can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as descibed in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.
<br>
## How to use
<br>
The getting started notebook of WangchanBERTa model can be found at this Colab notebook
<br>
## Training data
'wangchanberta-base-wiki-syllable' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.
### Preprocessing
Texts are preprocessed with the following rules:
- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove an empty parenthesis that occur right after the title of the first paragraph.
- Replace spaces wtth <_>.
<br>
Regarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as 'syllable' from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of word-level tokens in the vocabulary is 59,235.
We sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called "FULL-SENTENCES").
Regarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.
<br>
Train/Val/Test splits
We split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.
<br>
Pretraining
The model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\beta_1 = 0.9$, $\beta_2= 0.98$ and $\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint.
<br>
BibTeX entry and citation info
|
[
"# WangchanBERTa base model: 'wangchanberta-base-wiki-syllable'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-syllable' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as 'syllable' from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of word-level tokens in the vocabulary is 59,235.\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #fill-mask #th #arxiv-1907.11692 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us \n",
"# WangchanBERTa base model: 'wangchanberta-base-wiki-syllable'\n\n<br>\n\nPretrained RoBERTa BASE model on Thai Wikipedia corpus.\nThe script and documentation can be found at this reposiryory.\n<br>",
"## Model description\n\n<br>\n\nThe architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](URL \n\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the pretrained model for masked language modeling (i.e. predicting a mask token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\n\n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe getting started notebook of WangchanBERTa model can be found at this Colab notebook\n\n<br>",
"## Training data\n\n'wangchanberta-base-wiki-syllable' model was pretrained on Thai Wikipedia. Specifically, we use the Wikipedia dump articles on 20 August 2020 (URL We opt out lists, and tables.",
"### Preprocessing\n\nTexts are preprocessed with the following rules:\n\n- Replace non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.\n- Remove an empty parenthesis that occur right after the title of the first paragraph.\n- Replace spaces wtth <_>.\n\n<br>\n\n\nRegarding the vocabulary, we use a Thai syllable-level dictionary-based tokenizer denoted as 'syllable' from PyThaiNLP [Phatthiyaphaibun et al., 2016]. The total number of word-level tokens in the vocabulary is 59,235.\n\nWe sample sentences contigously to have the length of at most 512 tokens. For some sentences that overlap the boundary of 512 tokens, we split such sentence with an additional token as document separator. This is the same approach as proposed by [[Liu et al., 2019]](URL (called \"FULL-SENTENCES\"). \n\nRegarding the masking procedure, for each sequence, we sampled 15% of the tokens and replace them with<mask>token.Out of the 15%, 80% is replaced with a<mask>token, 10% is left unchanged and 10% is replaced with a random token.\n\n<br>\n\nTrain/Val/Test splits\n\nWe split sequencially 944,782 sentences for training set, 24,863 sentences for validation set and 24,862 sentences for test set.\n\n<br>\n\nPretraining\n\nThe model was trained on 32 V100 GPUs for 31,250 steps with the batch size of 8,192 (16 sequences per device with 16 accumulation steps) and a sequence length of 512 tokens. The optimizer we used is Adam with the learning rate of $7e-4$, $\\beta_1 = 0.9$, $\\beta_2= 0.98$ and $\\epsilon = 1e-6$. The learning rate is warmed up for the first 1250 steps and linearly decayed to zero. The model checkpoint with minimum validation loss will be selected as the best model checkpoint. \n\n<br>\n\nBibTeX entry and citation info"
] |
automatic-speech-recognition
|
transformers
|
# `wav2vec2-large-xlsr-53-th`
Finetuning `wav2vec2-large-xlsr-53` on Thai [Common Voice 7.0](https://commonvoice.mozilla.org/en/datasets)
[Read more on our blog](https://medium.com/airesearch-in-th/airesearch-in-th-3c1019a99cd)
We finetune [wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) based on [Fine-tuning Wav2Vec2 for English ASR](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) using Thai examples of [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets). The notebooks and scripts can be found in [vistec-ai/wav2vec2-large-xlsr-53-th](https://github.com/vistec-ai/wav2vec2-large-xlsr-53-th). The pretrained model and processor can be found at [airesearch/wav2vec2-large-xlsr-53-th](https://huggingface.co/airesearch/wav2vec2-large-xlsr-53-th).
## `robust-speech-event`
Add `syllable_tokenize`, `word_tokenize` ([PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)) and [deepcut](https://github.com/rkcosmos/deepcut) tokenizers to `eval.py` from [robust-speech-event](https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event#evaluation)
```
> python eval.py --model_id ./ --dataset mozilla-foundation/common_voice_7_0 --config th --split test --log_outputs --thai_tokenizer newmm/syllable/deepcut/cer
```
### Eval results on Common Voice 7 "test":
| | WER PyThaiNLP 2.3.1 | WER deepcut | SER | CER |
|---------------------------------|---------------------|-------------|---------|---------|
| Only Tokenization | 0.9524% | 2.5316% | 1.2346% | 0.1623% |
| Cleaning rules and Tokenization | TBD | TBD | TBD | TBD |
## Usage
```
# imports needed to run this snippet standalone
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

#load pretrained processor and model
processor = Wav2Vec2Processor.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
model = Wav2Vec2ForCTC.from_pretrained("airesearch/wav2vec2-large-xlsr-53-th")
#function to resample to 16_000
def speech_file_to_array_fn(batch,
text_col="sentence",
fname_col="path",
resampling_to=16000):
speech_array, sampling_rate = torchaudio.load(batch[fname_col])
resampler=torchaudio.transforms.Resample(sampling_rate, resampling_to)
batch["speech"] = resampler(speech_array)[0].numpy()
batch["sampling_rate"] = resampling_to
batch["target_text"] = batch[text_col]
return batch
#get 2 examples as sample input
#(test_dataset is a datasets.Dataset with 'path' and 'sentence' columns, e.g. the cleaned Common Voice test split described below)
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
#infer
with torch.no_grad():
logits = model(inputs.input_values,).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
>> Prediction: ['และ เขา ก็ สัมผัส ดีบุก', 'คุณ สามารถ รับทราบ เมื่อ ข้อความ นี้ ถูก อ่าน แล้ว']
>> Reference: ['และเขาก็สัมผัสดีบุก', 'คุณสามารถรับทราบเมื่อข้อความนี้ถูกอ่านแล้ว']
```
## Datasets
[Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with `pythainlp.tokenize.word_tokenize`. We preprocess the dataset using the cleaning rules described in `notebooks/cv-preprocess.ipynb` by [@tann9949](https://github.com/tann9949). We then deduplicate and split as described in [ekapolc/Thai_commonvoice_split](https://github.com/ekapolc/Thai_commonvoice_split) in order to 1) avoid data leakage due to random splits after cleaning in [Common Voice Corpus 7.0](https://commonvoice.mozilla.org/en/datasets) and 2) preserve the majority of the data for the training set. The dataset loading script is `scripts/th_common_voice_70.py`. You can use these scripts together with `train_cleand.tsv`, `validation_cleaned.tsv` and `test_cleaned.tsv` to reproduce our splits. The resulting dataset is as follows:
```
DatasetDict({
train: Dataset({
features: ['path', 'sentence'],
num_rows: 86586
})
test: Dataset({
features: ['path', 'sentence'],
num_rows: 2502
})
validation: Dataset({
features: ['path', 'sentence'],
num_rows: 3027
})
})
```
## Training
We finetuned the model using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is `scripts/wav2vec2_finetune.py`.
```
# imports for this configuration excerpt (processor is the Wav2Vec2Processor loaded above)
from transformers import Wav2Vec2ForCTC, TrainingArguments

# create model
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-large-xlsr-53",
attention_dropout=0.1,
hidden_dropout=0.1,
feat_proj_dropout=0.0,
mask_time_prob=0.05,
layerdrop=0.1,
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
vocab_size=len(processor.tokenizer)
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir="../data/wav2vec2-large-xlsr-53-thai",
group_by_length=True,
per_device_train_batch_size=32,
gradient_accumulation_steps=1,
per_device_eval_batch_size=16,
metric_for_best_model='wer',
evaluation_strategy="steps",
eval_steps=1000,
logging_strategy="steps",
logging_steps=1000,
save_strategy="steps",
save_steps=1000,
num_train_epochs=100,
fp16=True,
learning_rate=1e-4,
warmup_steps=1000,
save_total_limit=3,
report_to="tensorboard"
)
```
## Evaluation
We benchmark on the test set using WER with words tokenized by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) 2.3.1 and [deepcut](https://github.com/rkcosmos/deepcut), and CER. We also measure performance when spell correction using [TNC](http://www.arts.chula.ac.th/ling/tnc/) ngrams is applied. Evaluation codes can be found in `notebooks/wav2vec2_finetuning_tutorial.ipynb`. Benchmark is performed on `test-unique` split.
| | WER PyThaiNLP 2.3.1 | WER deepcut | CER |
|--------------------------------|---------------------|----------------|----------------|
| [Kaldi from scratch](https://github.com/vistec-AI/commonvoice-th) | 23.04 | | 7.57 |
| Ours without spell correction | 13.634024 | **8.152052** | **2.813019** |
| Ours with spell correction | 17.996397 | 14.167975 | 5.225761 |
| Google Web Speech API※ | 13.711234 | 10.860058 | 7.357340 |
| Microsoft Bing Speech API※ | **12.578819** | 9.620991 | 5.016620 |
| Amazon Transcribe※ | 21.86334 | 14.487553 | 7.077562 |
| NECTEC AI for Thai Partii API※ | 20.105887 | 15.515631 | 9.551027 |
※ APIs are not finetuned with Common Voice 7.0 data
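A minimal sketch of the tokenize-then-score evaluation described above, assuming the `evaluate` and PyThaiNLP packages (the project's own evaluation code lives in `notebooks/wav2vec2_finetuning_tutorial.ipynb` and `eval.py`):
```python
# Hedged sketch of the tokenize-then-WER/CER evaluation (illustrative only).
import evaluate
from pythainlp.tokenize import word_tokenize

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

def retokenize(texts):
    # Re-tokenize Thai text into space-separated words so WER is well defined.
    return [" ".join(word_tokenize(t.replace(" ", ""))) for t in texts]

def score(predictions, references):
    wer = wer_metric.compute(
        predictions=retokenize(predictions), references=retokenize(references)
    )
    cer = cer_metric.compute(predictions=predictions, references=references)
    return {"wer": wer, "cer": cer}
```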
## LICENSE
[cc-by-sa 4.0](https://github.com/vistec-AI/wav2vec2-large-xlsr-53-th/blob/main/LICENSE)
## Acknowledgements
* model training and validation notebooks/scripts [@cstorm125](https://github.com/cstorm125/)
* dataset cleaning scripts [@tann9949](https://github.com/tann9949)
* dataset splits [@ekapolc](https://github.com/ekapolc/) and [@14mss](https://github.com/14mss)
* running the training [@mrpeerat](https://github.com/mrpeerat)
* spell correction [@wannaphong](https://github.com/wannaphong)
|
{"language": "th", "license": "cc-by-sa-4.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event", "speech", "xlsr-fine-tuning"], "datasets": ["common_voice"], "model-index": [{"name": "XLS-R-53 - Thai", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "th"}, "metrics": [{"type": "wer", "value": 0.9524, "name": "Test WER"}, {"type": "ser", "value": 1.2346, "name": "Test SER"}, {"type": "cer", "value": 0.1623, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "sv"}, "metrics": [{"type": "wer", "name": "Test WER"}, {"type": "ser", "name": "Test SER"}, {"type": "cer", "name": "Test CER"}]}]}]}
|
airesearch/wav2vec2-large-xlsr-53-th
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"robust-speech-event",
"speech",
"xlsr-fine-tuning",
"th",
"dataset:common_voice",
"doi:10.57967/hf/0404",
"license:cc-by-sa-4.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"th"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #robust-speech-event #speech #xlsr-fine-tuning #th #dataset-common_voice #doi-10.57967/hf/0404 #license-cc-by-sa-4.0 #model-index #endpoints_compatible #has_space #region-us
|
'wav2vec2-large-xlsr-53-th'
===========================
Finetuning 'wav2vec2-large-xlsr-53' on Thai Common Voice 7.0
Read more on our blog
We finetune wav2vec2-large-xlsr-53 based on Fine-tuning Wav2Vec2 for English ASR using Thai examples of Common Voice Corpus 7.0. The notebooks and scripts can be found in vistec-ai/wav2vec2-large-xlsr-53-th. The pretrained model and processor can be found at airesearch/wav2vec2-large-xlsr-53-th.
'robust-speech-event'
---------------------
Add 'syllable\_tokenize', 'word\_tokenize' (PyThaiNLP) and deepcut tokenizers to 'URL' from robust-speech-event
### Eval results on Common Voice 7 "test":
Usage
-----
Datasets
--------
Common Voice Corpus 7.0](URL contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with 'pythainlp.tokenize.word\_tokenize'. We preprocess the dataset using cleaning rules described in 'notebooks/URL' by @tann9949. We then deduplicate and split as described in ekapolc/Thai\_commonvoice\_split in order to 1) avoid data leakage due to random splits after cleaning in Common Voice Corpus 7.0 and 2) preserve the majority of the data for the training set. The dataset loading script is 'scripts/th\_common\_voice\_70.py'. You can use this scripts together with 'train\_cleand.tsv', 'validation\_cleaned.tsv' and 'test\_cleaned.tsv' to have the same splits as we do. The resulting dataset is as follows:
Training
--------
We fintuned using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is 'scripts/wav2vec2\_finetune.py'
Evaluation
----------
We benchmark on the test set using WER with words tokenized by PyThaiNLP 2.3.1 and deepcut, and CER. We also measure performance when spell correction using TNC ngrams is applied. Evaluation codes can be found in 'notebooks/wav2vec2\_finetuning\_tutorial.ipynb'. Benchmark is performed on 'test-unique' split.
※ APIs are not finetuned with Common Voice 7.0 data
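
A minimal sketch of the tokenized-WER idea (word-level tokenization with PyThaiNLP before scoring); the example strings and the `jiwer` dependency are illustrative assumptions, not the exact evaluation code:

```python
from pythainlp.tokenize import word_tokenize
from jiwer import wer

# Tokenize Thai text into words first, then score the space-joined tokens.
reference = " ".join(word_tokenize("ฉันชอบกินข้าว", engine="newmm"))
hypothesis = " ".join(word_tokenize("ฉันชอบกินข้าวผัด", engine="newmm"))
print(wer(reference, hypothesis))
```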
LICENSE
-------
cc-by-sa 4.0
Acknowledgements
---------------
* model training and validation notebooks/scripts @cstorm125
* dataset cleaning scripts @tann9949
* dataset splits @ekapolc and @14mss
* running the training @mrpeerat
* spell correction @wannaphong
|
[
"### Eval results on Common Voice 7 \"test\":\n\n\n\nUsage\n-----\n\n\nDatasets\n--------\n\n\nCommon Voice Corpus 7.0](URL contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with 'pythainlp.tokenize.word\\_tokenize'. We preprocess the dataset using cleaning rules described in 'notebooks/URL' by @tann9949. We then deduplicate and split as described in ekapolc/Thai\\_commonvoice\\_split in order to 1) avoid data leakage due to random splits after cleaning in Common Voice Corpus 7.0 and 2) preserve the majority of the data for the training set. The dataset loading script is 'scripts/th\\_common\\_voice\\_70.py'. You can use this scripts together with 'train\\_cleand.tsv', 'validation\\_cleaned.tsv' and 'test\\_cleaned.tsv' to have the same splits as we do. The resulting dataset is as follows:\n\n\nTraining\n--------\n\n\nWe fintuned using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is 'scripts/wav2vec2\\_finetune.py'\n\n\nEvaluation\n----------\n\n\nWe benchmark on the test set using WER with words tokenized by PyThaiNLP 2.3.1 and deepcut, and CER. We also measure performance when spell correction using TNC ngrams is applied. Evaluation codes can be found in 'notebooks/wav2vec2\\_finetuning\\_tutorial.ipynb'. Benchmark is performed on 'test-unique' split.\n\n\n\n※ APIs are not finetuned with Common Voice 7.0 data\n\n\nLICENSE\n-------\n\n\ncc-by-sa 4.0\n\n\nAckowledgements\n---------------\n\n\n* model training and validation notebooks/scripts @cstorm125\n* dataset cleaning scripts @tann9949\n* dataset splits @ekapolc and @14mss\n* running the training @mrpeerat\n* spell correction @wannaphong"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #audio #hf-asr-leaderboard #robust-speech-event #speech #xlsr-fine-tuning #th #dataset-common_voice #doi-10.57967/hf/0404 #license-cc-by-sa-4.0 #model-index #endpoints_compatible #has_space #region-us \n",
"### Eval results on Common Voice 7 \"test\":\n\n\n\nUsage\n-----\n\n\nDatasets\n--------\n\n\nCommon Voice Corpus 7.0](URL contains 133 validated hours of Thai (255 total hours) at 5GB. We pre-tokenize with 'pythainlp.tokenize.word\\_tokenize'. We preprocess the dataset using cleaning rules described in 'notebooks/URL' by @tann9949. We then deduplicate and split as described in ekapolc/Thai\\_commonvoice\\_split in order to 1) avoid data leakage due to random splits after cleaning in Common Voice Corpus 7.0 and 2) preserve the majority of the data for the training set. The dataset loading script is 'scripts/th\\_common\\_voice\\_70.py'. You can use this scripts together with 'train\\_cleand.tsv', 'validation\\_cleaned.tsv' and 'test\\_cleaned.tsv' to have the same splits as we do. The resulting dataset is as follows:\n\n\nTraining\n--------\n\n\nWe fintuned using the following configuration on a single V100 GPU and chose the checkpoint with the lowest validation loss. The finetuning script is 'scripts/wav2vec2\\_finetune.py'\n\n\nEvaluation\n----------\n\n\nWe benchmark on the test set using WER with words tokenized by PyThaiNLP 2.3.1 and deepcut, and CER. We also measure performance when spell correction using TNC ngrams is applied. Evaluation codes can be found in 'notebooks/wav2vec2\\_finetuning\\_tutorial.ipynb'. Benchmark is performed on 'test-unique' split.\n\n\n\n※ APIs are not finetuned with Common Voice 7.0 data\n\n\nLICENSE\n-------\n\n\ncc-by-sa 4.0\n\n\nAckowledgements\n---------------\n\n\n* model training and validation notebooks/scripts @cstorm125\n* dataset cleaning scripts @tann9949\n* dataset splits @ekapolc and @14mss\n* running the training @mrpeerat\n* spell correction @wannaphong"
] |
question-answering
|
transformers
|
# xlm-roberta-base-finetune-qa
Finetuning `xlm-roberta-base` on the training sets of `iapp_wiki_qa_squad`, `thaiqa_squad`, and `nsc_qa` (we removed examples whose cosine similarity with validation and test examples exceeds 0.8; contexts of the latter two datasets are trimmed to roughly 300 `newmm` words). Benchmarks are shared on [wandb](https://wandb.ai/cstorm125/wangchanberta-qa) using the validation and test sets of `iapp_wiki_qa_squad`.
Trained with [thai2transformers](https://github.com/vistec-AI/thai2transformers/blob/dev/scripts/downstream/train_question_answering_lm_finetuning.py).
Train with:
```
export WANDB_PROJECT=wangchanberta-qa
export MODEL_NAME=xlm-roberta-base
python train_question_answering_lm_finetuning.py \
--model_name $MODEL_NAME \
--dataset_name chimera_qa \
--output_dir $MODEL_NAME-finetune-chimera_qa-model \
--log_dir $MODEL_NAME-finetune-chimera_qa-log \
--pad_on_right \
--fp16
```
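Once finetuned, inference can be done with the standard question-answering pipeline; the sketch below uses placeholder strings, so substitute any Thai question/context pair (for example the widget sample in this card's metadata):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="airesearch/xlm-roberta-base-finetune-qa")

# Placeholders: replace with a real Thai question and its supporting context.
result = qa(question="<Thai question>", context="<Thai context passage>")
print(result["answer"], result["score"])
```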
|
{"widget": [{"text": "\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2d\u0e30\u0e44\u0e23", "context": "\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e2a\u0e27\u0e19\u0e01\u0e38\u0e2b\u0e25\u0e32\u0e1a\u0e27\u0e34\u0e17\u0e22\u0e32\u0e25\u0e31\u0e22 (Suankularb Wittayalai School) (\u0e2d\u0e31\u0e01\u0e29\u0e23\u0e22\u0e48\u0e2d : \u0e2a.\u0e01. / S.K.) \u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e0a\u0e32\u0e22\u0e25\u0e49\u0e27\u0e19 \u0e23\u0e30\u0e14\u0e31\u0e1a\u0e0a\u0e31\u0e49\u0e19\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e19\u0e32\u0e14\u0e43\u0e2b\u0e0d\u0e48\u0e1e\u0e34\u0e40\u0e28\u0e29 \u0e2a\u0e31\u0e07\u0e01\u0e31\u0e14\u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e40\u0e02\u0e15\u0e1e\u0e37\u0e49\u0e19\u0e17\u0e35\u0e48\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e21\u0e31\u0e18\u0e22\u0e21\u0e28\u0e36\u0e01\u0e29\u0e32\u0e40\u0e02\u0e15 1 \u0e2a\u0e33\u0e19\u0e31\u0e01\u0e07\u0e32\u0e19\u0e04\u0e13\u0e30\u0e01\u0e23\u0e23\u0e21\u0e01\u0e32\u0e23\u0e01\u0e32\u0e23\u0e28\u0e36\u0e01\u0e29\u0e32\u0e02\u0e31\u0e49\u0e19\u0e1e\u0e37\u0e49\u0e19\u0e10\u0e32\u0e19 (\u0e0a\u0e37\u0e48\u0e2d\u0e40\u0e14\u0e34\u0e21: \u0e01\u0e23\u0e21\u0e2a\u0e32\u0e21\u0e31\u0e0d\u0e28\u0e36\u0e01\u0e29\u0e32) \u0e01\u0e23\u0e30\u0e17\u0e23\u0e27\u0e07\u0e28\u0e36\u0e01\u0e29\u0e32\u0e18\u0e34\u0e01\u0e32\u0e23 \u0e01\u0e48\u0e2d\u0e15\u0e31\u0e49\u0e07\u0e42\u0e14\u0e22 \u0e1e\u0e23\u0e30\u0e1a\u0e32\u0e17\u0e2a\u0e21\u0e40\u0e14\u0e47\u0e08\u0e1e\u0e23\u0e30\u0e08\u0e38\u0e25\u0e08\u0e2d\u0e21\u0e40\u0e01\u0e25\u0e49\u0e32\u0e40\u0e08\u0e49\u0e32\u0e2d\u0e22\u0e39\u0e48\u0e2b\u0e31\u0e27 \u0e44\u0e14\u0e49\u0e23\u0e31\u0e1a\u0e01\u0e32\u0e23\u0e2a\u0e16\u0e32\u0e1b\u0e19\u0e32\u0e02\u0e36\u0e49\u0e19\u0e43\u0e19\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 8 \u0e21\u0e35\u0e19\u0e32\u0e04\u0e21 \u0e1e.\u0e28. 2424 (\u0e02\u0e13\u0e30\u0e19\u0e31\u0e49\u0e19\u0e19\u0e31\u0e1a\u0e27\u0e31\u0e19\u0e17\u0e35\u0e48 1 \u0e40\u0e21\u0e29\u0e32\u0e22\u0e19 \u0e40\u0e1b\u0e47\u0e19\u0e27\u0e31\u0e19\u0e02\u0e36\u0e49\u0e19\u0e1b\u0e35\u0e43\u0e2b\u0e21\u0e48 \u0e40\u0e21\u0e37\u0e48\u0e2d\u0e19\u0e31\u0e1a\u0e2d\u0e22\u0e48\u0e32\u0e07\u0e2a\u0e32\u0e01\u0e25\u0e16\u0e37\u0e2d\u0e40\u0e1b\u0e47\u0e19 \u0e1e.\u0e28. 2425) \u0e42\u0e14\u0e22\u0e40\u0e1b\u0e47\u0e19\u0e42\u0e23\u0e07\u0e40\u0e23\u0e35\u0e22\u0e19\u0e23\u0e31\u0e10\u0e1a\u0e32\u0e25\u0e41\u0e2b\u0e48\u0e07\u0e41\u0e23\u0e01\u0e02\u0e2d\u0e07\u0e1b\u0e23\u0e30\u0e40\u0e17\u0e28\u0e44\u0e17\u0e22"}]}
|
airesearch/xlm-roberta-base-finetune-qa
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #question-answering #endpoints_compatible #region-us
|
# xlm-roberta-base-finetune-qa
Finetuning 'xlm-roberta-base' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.
Trained with thai2transformers.
Train with:
|
[
"# xlm-roberta-base-finetune-qa\n\nFinetuning 'xlm-roberta-base' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nTrain with:"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #question-answering #endpoints_compatible #region-us \n",
"# xlm-roberta-base-finetune-qa\n\nFinetuning 'xlm-roberta-base' with the training set of 'iapp_wiki_qa_squad', 'thaiqa_squad', and 'nsc_qa' (removed examples which have cosine similarity with validation and test examples over 0.8; contexts of the latter two are trimmed to be around 300 'newmm' words). Benchmarks shared on wandb using validation and test sets of 'iapp_wiki_qa_squad'.\nTrained with thai2transformers.\n\nTrain with:"
] |
fill-mask
|
transformers
|
# Finetuned `xlm-roberta-base` model on Thai sequence and token classification datasets
<br>
Finetuned XLM Roberta BASE model on Thai sequence and token classification datasets
The script and documentation can be found at [this repository](https://github.com/vistec-AI/thai2transformers).
<br>
## Model description
<br>
We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](https://arxiv.org/abs/1911.02116). We download the pretrained PyTorch model via HuggingFace's Model Hub (https://huggingface.co/xlm-roberta-base)
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification tasks.
<br>
**Multiclass text classification**
- `wisesight_sentiment`
4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
- `wongnai_reivews`
Users' review rating classification task (scale ranging from 1 to 5)
- `generated_reviews_enth` : (`review_star` as label)
Generated users' review rating classification task (scale ranging from 1 to 5).
**Multilabel text classification**
- `prachathai67k`
Thai topic classification with 12 labels based on a news article corpus from prachathai.com. The details are described on this [page](https://huggingface.co/datasets/prachathai67k).
**Token classification**
- `thainer`
Named-entity recognition tagging with 13 named entities as described on this [page](https://huggingface.co/datasets/thainer).
- `lst20` : NER and POS tagging
Named-entity recognition tagging with 10 named entities and Part-of-Speech tagging with 16 tags as described on this [page](https://huggingface.co/datasets/lst20).
<br>
## How to use
<br>
The example notebook demonstrating how to use the finetuned models for inference can be found at this [Colab notebook](https://colab.research.google.com/drive/1Kbk6sBspZLwcnOE61adAQo30xxqOQ9ko)
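For a quick start outside Colab, a minimal loading sketch is shown below. The checkpoint name and `num_labels` are placeholders: point them at the finetuned checkpoint and label count you actually want (the classification head is freshly initialized if the checkpoint only contains the pretrained encoder).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "airesearch/xlm-roberta-base-finetuned"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=4)

inputs = tokenizer("ตัวอย่างข้อความภาษาไทย", return_tensors="pt")  # "example Thai text"
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```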
<br>
**BibTeX entry and citation info**
```
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{}
|
airesearch/xlm-roberta-base-finetuned
| null |
[
"transformers",
"xlm-roberta",
"fill-mask",
"arxiv:1911.02116",
"arxiv:2101.09635",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1911.02116",
"2101.09635"
] |
[] |
TAGS
#transformers #xlm-roberta #fill-mask #arxiv-1911.02116 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us
|
# Finetuend 'xlm-roberta-base' model on Thai sequence and token classification datasets
<br>
Finetuned XLM Roberta BASE model on Thai sequence and token classification datasets
The script and documentation can be found at this repository.
<br>
## Model description
<br>
We use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](URL We download the pretrained PyTorch model via HuggingFace's Model Hub (URL
<br>
## Intended uses & limitations
<br>
You can use the finetuned models for multiclass/multilabel text classification and token classification task.
<br>
Multiclass text classification
- 'wisesight_sentiment'
4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.
- 'wongnai_reivews'
Users' review rating classification task (scale is ranging from 1 to 5)
- 'generated_reviews_enth' : ('review_star' as label)
Generated users' review rating classification task (scale is ranging from 1 to 5).
Multilabel text classification
- 'prachathai67k'
Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.
Token classification
- 'thainer'
Named-entity recognition tagging with 13 named-entities as descibed in this page.
- 'lst20' : NER NER and POS tagging
Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.
<br>
## How to use
<br>
The example notebook demonstrating how to use finetuned model for inference can be found at this Colab notebook
<br>
BibTeX entry and citation info
|
[
"# Finetuend 'xlm-roberta-base' model on Thai sequence and token classification datasets\n\n<br>\n\nFinetuned XLM Roberta BASE model on Thai sequence and token classification datasets\nThe script and documentation can be found at this repository.\n\n<br>",
"## Model description\n\n<br>\n\nWe use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](URL We download the pretrained PyTorch model via HuggingFace's Model Hub (URL\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe example notebook demonstrating how to use finetuned model for inference can be found at this Colab notebook\n\n<br>\n\n\nBibTeX entry and citation info"
] |
[
"TAGS\n#transformers #xlm-roberta #fill-mask #arxiv-1911.02116 #arxiv-2101.09635 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Finetuend 'xlm-roberta-base' model on Thai sequence and token classification datasets\n\n<br>\n\nFinetuned XLM Roberta BASE model on Thai sequence and token classification datasets\nThe script and documentation can be found at this repository.\n\n<br>",
"## Model description\n\n<br>\n\nWe use the pretrained cross-lingual RoBERTa model as proposed by [[Conneau et al., 2020]](URL We download the pretrained PyTorch model via HuggingFace's Model Hub (URL\n<br>",
"## Intended uses & limitations\n\n<br>\n\nYou can use the finetuned models for multiclass/multilabel text classification and token classification task.\n\n<br>\n\nMulticlass text classification\n\n\n- 'wisesight_sentiment' \n \n 4-class text classification task ('positive', 'neutral', 'negative', and 'question') based on social media posts and tweets.\n\n- 'wongnai_reivews' \n\n Users' review rating classification task (scale is ranging from 1 to 5)\n\n- 'generated_reviews_enth' : ('review_star' as label)\n\n Generated users' review rating classification task (scale is ranging from 1 to 5).\n\nMultilabel text classification\n\n- 'prachathai67k'\n\n Thai topic classification with 12 labels based on news article corpus from URL. The detail is described in this page.\n \n\nToken classification\n\n- 'thainer' \n\n Named-entity recognition tagging with 13 named-entities as descibed in this page.\n\n- 'lst20' : NER NER and POS tagging\n \n Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this page.\n\n<br>",
"## How to use\n\n<br>\n\nThe example notebook demonstrating how to use finetuned model for inference can be found at this Colab notebook\n\n<br>\n\n\nBibTeX entry and citation info"
] |
text-generation
|
transformers
|
# Michael Scott DialoGPT Model
|
{"tags": ["conversational"]}
|
aishanisingh/DiagloGPT-small-michaelscott
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Michael Scott DialoGPT Model
|
[
"# Michael Scott DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Michael Scott DialoGPT Model"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
aishanisingh/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
null | null |
# pip install vaderSentiment  (shell command: run this before executing the script)
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
analyser.polarity_scores("I hate watching movies")
import nltk
from nltk.tokenize import word_tokenize, RegexpTokenizer
from nltk.sentiment.vader import SentimentIntensityAnalyzer
nltk.download('all')
import numpy as np
sentence = """I love dancing & painting"""
tokenized_sentence = nltk.word_tokenize(sentence)
from nltk import word_tokenize
from typing import List
Analyzer = SentimentIntensityAnalyzer()
pos_word_list=[]
neu_word_list=[]
neg_word_list=[]
pos_score_list=[]
neg_score_list=[]
score_list=[]
for word in tokenized_sentence:
    if (Analyzer.polarity_scores(word)['compound']) >= 0.1:
        pos_word_list.append(word)
        score_list.append(Analyzer.polarity_scores(word)['compound'])
    elif (Analyzer.polarity_scores(word)['compound']) <= -0.1:
        neg_word_list.append(word)
        score_list.append(Analyzer.polarity_scores(word)['compound'])
    else:
        neu_word_list.append(word)
        score_list.append(Analyzer.polarity_scores(word)['compound'])
print('Positive:',pos_word_list)
print('Neutral:',neu_word_list)
print('Negative:',neg_word_list)
print('Score:', score_list)
score = Analyzer.polarity_scores(sentence)
print('\nScores:', score)
predict_log=score.values()
value_iterator=iter(predict_log)
neg_prediction=next(value_iterator)
neu_prediction=next(value_iterator)
pos_prediction=next(value_iterator)
prediction_list=[neg_prediction, pos_prediction]
prediction_list_array=np.array(prediction_list)
# Note: `predict` looks like a fragment lifted from a class. `self.score` and
# `self.classes` are not defined in this snippet, so the signature below is a guess.
import scipy.stats

def predict(self, texts):
    probs = []
    for text in texts:
        offset = (self.score(text) + 1) / 2.
        binned = np.digitize(5 * offset, self.classes) + 1
        simulated_probs = scipy.stats.norm.pdf(self.classes, binned, scale=0.5)
        probs.append(simulated_probs)
    return np.array(probs)
latex_special_token = ["!@#$%^&*()"]
import operator
def generate(text_list, attention_list, latex_file, color_neg='red', color_pos='green', rescale_value=False):
    print("hello")
    attention_list = rescale(attention_list)
    word_num = len(text_list)
    print(len(attention_list))
    print(len(text_list))
    text_list = clean_word(text_list)
    with open(latex_file, 'w') as f:
        f.write(r'''\documentclass[varwidth]{standalone}
\special{papersize=210mm,297mm}
\usepackage{color}
\usepackage{tcolorbox}
\usepackage{CJK}
\usepackage{adjustbox}
\tcbset{width=0.9\textwidth,boxrule=0pt,colback=red,arc=0pt,auto outer arc,left=0pt,right=0pt,boxsep=5pt}
\begin{document}
\begin{CJK*}{UTF8}{gbsn}'''+'\n')
        string = r'''{\setlength{\fboxsep}{0pt}\colorbox{white!0}{\parbox{0.9\textwidth}{'''+"\n"
        for idx in range(len(attention_list)):
            if attention_list[idx] > 0:
                string += "\\colorbox{%s!%s}{"%(color_pos, attention_list[idx])+"\\strut " + text_list[idx]+"} "
            else:
                string += "\\colorbox{%s!%s}{"%(color_neg, -attention_list[idx])+"\\strut " + text_list[idx]+"} "
        string += "\n}}}"
        f.write(string+'\n')
        f.write(r'''\end{CJK*}
\end{document}''')
def rescale(input_list):
    the_array = np.asarray(input_list)
    the_max = np.max(abs(the_array))
    rescale = the_array/the_max
    rescale = rescale*100
    rescale = np.round(rescale, 3)
    '''
    the_array = np.asarray(input_list)
    the_max = np.max(the_array)
    the_min = np.min(the_array)
    rescale = ((the_array - the_min)/(the_max-the_min))*100
    for i in rescale:
        print(rescale)
    '''
    return rescale.tolist()
def clean_word(word_list):
    new_word_list = []
    for word in word_list:
        for latex_sensitive in ["\\", "%", "&", "^", "#", "_", "{", "}"]:
            if latex_sensitive in word:
                word = word.replace(latex_sensitive, '\\'+latex_sensitive)
        new_word_list.append(word)
    return new_word_list
if __name__ == '__main__':
    color_1 = 'red'
    color_2 = 'green'
    words = word_tokenize(sentence)
    word_num = len(words)
    generate(words, score_list, "sple.tex", color_1, color_2)
|
{}
|
aishoo1612/VADER-With-heatmaps
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
analyser = SentimentIntensityAnalyzer()
analyser.polarity_scores("I hate watching movies")
import nltk
from nltk.tokenize import word_tokenize, RegexpTokenizer
from URL import SentimentIntensityAnalyzer
nltk.download('all')
import numpy as np
sentence = """I love dancing & painting"""
tokenized_sentence = nltk.word_tokenize(sentence)
from nltk import word_tokenize
from typing import List
Analyzer = SentimentIntensityAnalyzer()
pos_word_list=[]
neu_word_list=[]
neg_word_list=[]
pos_score_list=[]
neg_score_list=[]
score_list=[]
for word in tokenized_sentence:
if (Analyzer.polarity_scores(word)['compound']) >= 0.1:
pos_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
elif (Analyzer.polarity_scores(word)['compound']) <= -0.1:
neg_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
else:
neu_word_list.append(word)
score_list.append(Analyzer.polarity_scores(word)['compound'])
print('Positive:',pos_word_list)
print('Neutral:',neu_word_list)
print('Negative:',neg_word_list)
print('Score:', score_list)
score = Analyzer.polarity_scores(sentence)
print('\nScores:', score)
predict_log=URL()
value_iterator=iter(predict_log)
neg_prediction=next(value_iterator)
neu_prediction=next(value_iterator)
pos_prediction=next(value_iterator)
prediction_list=[neg_prediction, pos_prediction]
prediction_list_array=URL(prediction_list)
def predict():
probs = []
for text in texts:
offset = (URL(text) + 1) / 2.
binned = np.digitize(5 * offset, self.classes) + 1
simulated_probs = URL(self.classes, binned, scale=0.5)
URL(simulated_probs)
return URL(probs)
latex_special_token = ["!@#$%^&*()"]
import operator
def generate(text_list, attention_list, latex_file, color_neg='red', color_pos='green', rescale_value = False):
print("hello")
attention_list = rescale(attention_list)
word_num = len(text_list)
print(len(attention_list))
print(len(text_list))
text_list = clean_word(text_list)
with open(latex_file,'w') as f:
f.write(r'''\documentclass[varwidth]{standalone}
\special{papersize=210mm,297mm}
\usepackage{color}
\usepackage{tcolorbox}
\usepackage{CJK}
\usepackage{adjustbox}
\tcbset{width=0.9\textwidth,boxrule=0pt,colback=red,arc=0pt,auto outer arc,left=0pt,right=0pt,boxsep=5pt}
\begin{document}
\begin{CJK*}{UTF8}{gbsn}'''+'\n')
string = r'''{\setlength{\fboxsep}{0pt}\colorbox{white!0}{\parbox{0.9\textwidth}{'''+"\n"
for idx in range(len(attention_list)):
if attention_list[idx] > 0:
string += "\\colorbox{%s!%s}{"%(color_pos, attention_list[idx])+"\\strut " + text_list[idx]+"} "
else:
string += "\\colorbox{%s!%s}{"%(color_neg, -attention_list[idx])+"\\strut " + text_list[idx]+"} "
string += "\n}}}"
f.write(string+'\n')
f.write(r'''\end{CJK*}
\end{document}''')
def rescale(input_list):
the_array = np.asarray(input_list)
the_max = URL(abs(the_array))
rescale = the_array/the_max
rescale = rescale*100
rescale = URL(rescale, 3)
'''
the_array = np.asarray(input_list)
the_max = URL(the_array)
the_min = URL(the_array)
rescale = ((the_array - the_min)/(the_max-the_min))*100
for i in rescale:
print(rescale)
'''
return URL()
def clean_word(word_list):
new_word_list = []
for word in word_list:
for latex_sensitive in ["\\", "%", "&", "^", "#", "_", "{", "}"]:
if latex_sensitive in word:
word = word.replace(latex_sensitive, '\\'+latex_sensitive)
new_word_list.append(word)
return new_word_list
if __name__ == '__main__':
color_1 = 'red'
color_2 = 'green'
words = word_tokenize(sentence)
word_num = len(words)
generate(words, score_list, "URL", color_1, color_2)
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
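
For reference, a minimal sketch of how these values map onto `TrainingArguments` (the Adam betas and epsilon above are the library defaults; `output_dir` is an illustrative choice):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hi-colab_new",  # illustrative
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    warmup_steps=500,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,  # Native AMP mixed precision
)
```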
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-hi-colab_new", "results": []}]}
|
ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-large-xls-r-300m-hi-colab_new\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-large-xls-r-300m-hi-colab_new\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
image-classification
|
transformers
|
# greens
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### cucumber

#### green beans

#### okra

#### pickle

#### zucinni

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
ajanco/greens
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# greens
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### cucumber
!cucumber
#### green beans
!green beans
#### okra
!okra
#### pickle
!pickle
#### zucinni
!zucinni
|
[
"# greens\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### cucumber\n\n!cucumber",
"#### green beans\n\n!green beans",
"#### okra\n\n!okra",
"#### pickle\n\n!pickle",
"#### zucinni\n\n!zucinni"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# greens\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### cucumber\n\n!cucumber",
"#### green beans\n\n!green beans",
"#### okra\n\n!okra",
"#### pickle\n\n!pickle",
"#### zucinni\n\n!zucinni"
] |
fill-mask
|
transformers
|
This **cased model** was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER **as is, with no fine-tuning**. The approach is described [in this post](https://ajitrajasekharan.github.io/2021/01/02/my-first-post.html). [Towards Data Science review](https://twitter.com/TDataScience/status/1486300137366466560?s=20)
[App in Spaces](https://huggingface.co/spaces/ajitrajasekharan/self-supervised-ner-biomedical) demonstrates this approach.
[Github link](https://github.com/ajitrajasekharan/unsupervised_NER) to perform NER using this model in an ensemble with bert-base cased.
The ensemble detects 69 entity subtypes (17 broad entity groups)
<img src="https://ajitrajasekharan.github.io/images/1.png" width="600">
### Ensemble model performance
<img src="https://ajitrajasekharan.github.io/images/6.png" width="600">
### Additional notes
- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.
- Some of the top model predictions, such as "a", "the", and punctuation, are valid predictions but carry no entity information. These are filtered out when harvesting descriptors for NER. The examples on the right are unfiltered results.
- [Use this link](https://huggingface.co/spaces/ajitrajasekharan/Qualitative-pretrained-model-evaluation) to examine both fill-mask prediction and [CLS] predictions
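
A minimal sketch of harvesting masked-position predictions with the fill-mask pipeline (the example sentence is taken from the inference widget; note that, as mentioned above, the pipeline does not expose the [CLS] predictions):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ajitrajasekharan/biomedical")
predictions = fill_mask(
    "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]",
    top_k=10,
)
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```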
### License
MIT license
<a href="https://huggingface.co/exbert/?model=ajitrajasekharan/biomedical&modelKind=bidirectional&sentence=Gefitinib%20is%20an%20EGFR%20tyrosine%20kinase%20inhibitor,%20which%20is%20often%20used%20for%20breast%20cancer%20and%20NSCLC%20treatment.&layer=3&heads=..0,1,2,3,4,5,6,7,8,9,10,11&threshold=0.7&tokenInd=17&tokenSide=right&maskInds=..&hideClsSep=true">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": [{}], "license": "mit", "tags": [{}, "exbert"], "widget": [{"text": "Lou Gehrig who works for XCorp and lives in New York suffers from [MASK]", "example_title": "Test for entity type: Disease"}, {"text": "Overexpression of [MASK] occurs across a wide range of cancers", "example_title": "Test for entity type: Gene"}, {"text": "Patients treated with [MASK] are vulnerable to infectious diseases", "example_title": "Test for entity type: Drug"}, {"text": "A eGFR level below [MASK] indicates chronic kidney disease", "example_title": "Test for entity type: Measure "}, {"text": "In the [MASK], increased daily imatinib dose induced MMR", "example_title": "Test for entity type: STUDY/TRIAL"}, {"text": "Paul Erdos died at [MASK]", "example_title": "Test for entity type: TIME"}], "inference": {"parameters": {"top_k": 10}}}
|
ajitrajasekharan/biomedical
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
This cased model was pretrained from scratch using a custom vocabulary on the following corpora
- Pubmed
- Clinical trials corpus
- and a small subset of Bookcorpus
The pretrained model was used to do NER as is, with no fine-tuning. The approach is described in this post. Towards Data Science review
App in Spaces demonstrates this approach.
Github link to perform NER using this model in an ensemble with bert-base cased.
The ensemble detects 69 entity subtypes (17 broad entity groups)
<img src="URL width="600">
### Ensemble model performance
<img src="URL width="600">
### Additional notes
- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.
- Some of the top model predictions like "a", "the", punctuations, etc. while valid predictions, bear no entity information. These are filtered when harvesting descriptors for NER. The examples on the right are unfiltered results.
- Use this link to examine both fill-mask prediction and [CLS] predictions
### License
MIT license
<a href="URL
<img width="300px" src="URL
</a>
|
[
"### Ensemble model performance\n\n <img src=\"URL width=\"600\">",
"### Additional notes\n\n- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.\n- Some of the top model predictions like \"a\", \"the\", punctuations, etc. while valid predictions, bear no entity information. These are filtered when harvesting descriptors for NER. The examples on the right are unfiltered results.\n- Use this link to examine both fill-mask prediction and [CLS] predictions",
"### License\n\nMIT license\n\n<a href=\"URL \n\t<img width=\"300px\" src=\"URL\n</a>"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Ensemble model performance\n\n <img src=\"URL width=\"600\">",
"### Additional notes\n\n- The model predictions on the right do not include [CLS] predictions. Hosted inference API only returns the masked position predictions. In practice, the [CLS] predictions are just as useful as the model predictions for the masked position _(if the next sentence prediction loss was low during pretraining)_ and are used for NER.\n- Some of the top model predictions like \"a\", \"the\", punctuations, etc. while valid predictions, bear no entity information. These are filtered when harvesting descriptors for NER. The examples on the right are unfiltered results.\n- Use this link to examine both fill-mask prediction and [CLS] predictions",
"### License\n\nMIT license\n\n<a href=\"URL \n\t<img width=\"300px\" src=\"URL\n</a>"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8385
- Matthews Correlation: 0.5865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4887 | 1.0 | 535 | 0.5016 | 0.5107 |
| 0.286 | 2.0 | 1070 | 0.5473 | 0.5399 |
| 0.1864 | 3.0 | 1605 | 0.7114 | 0.5706 |
| 0.1163 | 4.0 | 2140 | 0.8385 | 0.5865 |
| 0.0834 | 5.0 | 2675 | 0.9610 | 0.5786 |
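
The Matthews correlation reported above can be reproduced with scikit-learn; a toy sketch with made-up labels:

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]  # illustrative gold labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # illustrative predictions
print(matthews_corrcoef(y_true, y_pred))  # 0.5 for this toy example
```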
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "bert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5864941797290588, "name": "Matthews Correlation"}]}]}]}
|
ajrae/bert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-cola
================================
This model is a fine-tuned version of bert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8385
* Matthews Correlation: 0.5865
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4520
- Accuracy: 0.8578
- F1: 0.9003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.4169 | 0.8039 | 0.8639 |
| No log | 2.0 | 460 | 0.4299 | 0.8137 | 0.875 |
| 0.4242 | 3.0 | 690 | 0.4520 | 0.8578 | 0.9003 |
| 0.4242 | 4.0 | 920 | 0.6323 | 0.8431 | 0.8926 |
| 0.1103 | 5.0 | 1150 | 0.6163 | 0.8578 | 0.8997 |
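
Since MRPC is a sentence-pair task, both sentences are passed to the tokenizer together at inference time; a minimal sketch with an illustrative pair (label index 1 is conventionally the "equivalent" class for MRPC):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "ajrae/bert-base-uncased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "The company reported strong quarterly earnings.",
    "Quarterly earnings at the company were strong.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```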
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "bert-base-uncased-finetuned-mrpc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}, "metrics": [{"type": "accuracy", "value": 0.8578431372549019, "name": "Accuracy"}, {"type": "f1", "value": 0.9003436426116839, "name": "F1"}]}]}]}
|
ajrae/bert-base-uncased-finetuned-mrpc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-mrpc
================================
This model is a fine-tuned version of bert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4520
* Accuracy: 0.8578
* F1: 0.9003
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-Total
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2814
- Wer: 0.2260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 2.9157 | 0.2 | 400 | 2.8204 | 0.9707 |
| 0.9554 | 0.4 | 800 | 0.5295 | 0.5046 |
| 0.7585 | 0.6 | 1200 | 0.4007 | 0.3850 |
| 0.7288 | 0.8 | 1600 | 0.3632 | 0.3447 |
| 0.6792 | 1.0 | 2000 | 0.3433 | 0.3216 |
| 0.6085 | 1.2 | 2400 | 0.3254 | 0.2928 |
| 0.6225 | 1.4 | 2800 | 0.3161 | 0.2832 |
| 0.6183 | 1.6 | 3200 | 0.3111 | 0.2721 |
| 0.5947 | 1.8 | 3600 | 0.2969 | 0.2615 |
| 0.5953 | 2.0 | 4000 | 0.2912 | 0.2515 |
| 0.5358 | 2.2 | 4400 | 0.2920 | 0.2501 |
| 0.5535 | 2.4 | 4800 | 0.2939 | 0.2538 |
| 0.5408 | 2.6 | 5200 | 0.2854 | 0.2452 |
| 0.5272 | 2.8 | 5600 | 0.2816 | 0.2434 |
| 0.5248 | 3.0 | 6000 | 0.2755 | 0.2354 |
| 0.4923 | 3.2 | 6400 | 0.2795 | 0.2353 |
| 0.489 | 3.4 | 6800 | 0.2767 | 0.2330 |
| 0.4932 | 3.6 | 7200 | 0.2821 | 0.2335 |
| 0.4841 | 3.8 | 7600 | 0.2756 | 0.2349 |
| 0.4794 | 4.0 | 8000 | 0.2751 | 0.2265 |
| 0.444 | 4.2 | 8400 | 0.2809 | 0.2283 |
| 0.4533 | 4.4 | 8800 | 0.2804 | 0.2312 |
| 0.4563 | 4.6 | 9200 | 0.2830 | 0.2256 |
| 0.4498 | 4.8 | 9600 | 0.2819 | 0.2251 |
| 0.4532 | 5.0 | 10000 | 0.2814 | 0.2260 |
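
The WER column above can be computed with the `jiwer` package; a toy sketch with made-up strings:

```python
from jiwer import wer

reference = "the cat sat on the mat"   # illustrative reference transcript
hypothesis = "the cat sat on the hat"  # illustrative model output
print(wer(reference, hypothesis))      # 1 substitution over 6 words, about 0.167
```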
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-large-xlsr-53-Total", "results": []}]}
|
akadriu/wav2vec2-large-xlsr-53-Total
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-Total
============================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2814
* Wer: 0.2260
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/gpt2-indonesia"
generator = pipeline('text-generation',
                     model=path)
set_seed(42)
kalimat = "dahulu kala ada sebuah"
preds = generator(kalimat,
                  max_length=64,
                  num_return_sequences=3)
for data in preds:
    print(data)
# {'generated_text': 'dahulu kala ada sebuah perkampungan yang bernama pomere. namun kini kawasan ini sudah tidak dikembangkan lagi sebagai kawasan industri seperti perusahaan pupuk. sumber-sumber lain sudah sulit ditemukan karena belum adanya kilang pupuk milik indonesia yang sering di kembangkan sehingga belum ada satupun yang masih tersisa yang tersisa. kawasan ini juga memproduksi gula aren milik pt graha bina sarana'}
# {'generated_text': 'dahulu kala ada sebuah desa kecil bernama desa. desa yang terkenal seperti halnya kota terdekat lainnya adalah desa tetangga yang bernama sama."\n"sebuah masjid merupakan suatu tempat suci yang digunakan umat islam untuk beribadah. beberapa masjid yang didaftarkan berikut memiliki suatu kehormatan tersendiri bagi masing-masing denominasi islam di dunia. sebuah masjid selain memiliki fungsi sebagai tempat'}
# {'generated_text': 'dahulu kala ada sebuah peradaban yang dibangun di sebelah barat sungai mississippi di sekitar desa kecil desa yang bernama sama. penduduk asli di desa ini berasal dari etnis teweh yang berpindah agama menjadi kristen, namun kemudian pindah agama menjadi kristen. desa arawak mempunyai beberapa desa lain seperti adibei, deti, riuhut dan sa'}
```
|
{"language": "id", "widget": [{"text": "dahulu kala ada sebuah"}]}
|
akahana/gpt2-indonesia
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"gpt2",
"text-generation",
"id",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #safetensors #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## how to use
|
[
"## how to use"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #gpt2 #text-generation #id #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## how to use"
] |
text-classification
|
transformers
|
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-emotion-roberta"
emotion = pipeline('text-classification',
                   model=path, device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
print(preds)
# [{'label': 'BAHAGIA', 'score': 0.8790940046310425}]
```
|
{"language": "id", "widget": [{"text": "dia orang yang baik ya bunds."}]}
|
akahana/indonesia-emotion-roberta
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #id #autotrain_compatible #endpoints_compatible #region-us
|
## how to use
|
[
"## how to use"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #roberta #text-classification #id #autotrain_compatible #endpoints_compatible #region-us \n",
"## how to use"
] |
text-classification
|
transformers
|
## how to use
```python
from transformers import pipeline, set_seed
path = "akahana/indonesia-sentiment-roberta"
emotion = pipeline('text-classification',
model=path,device=0)
set_seed(42)
kalimat = "dia orang yang baik ya bunds."
preds = emotion(kalimat)
preds
```
|
{"language": "id", "widget": [{"text": "dia orang yang baik ya bunds."}]}
|
akahana/indonesia-sentiment-roberta
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"id",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #id #autotrain_compatible #endpoints_compatible #region-us
|
## how to use
|
[
"## how to use"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #id #autotrain_compatible #endpoints_compatible #region-us \n",
"## how to use"
] |
feature-extraction
|
transformers
|
# Indonesian RoBERTa Base
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "akahana/roberta-base-indonesia"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("Gajah <mask> sedang makan di kebun binatang.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "akahana/roberta-base-indonesia"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "Gajah <mask> sedang makan di kebun binatang."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
|
{"language": "id", "license": "mit", "tags": ["roberta-base-indonesia"], "datasets": ["wikipedia"], "widget": [{"text": "Gajah <mask> sedang makan di kebun binatang."}]}
|
akahana/roberta-base-indonesia
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"feature-extraction",
"roberta-base-indonesia",
"id",
"dataset:wikipedia",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #safetensors #roberta #feature-extraction #roberta-base-indonesia #id #dataset-wikipedia #license-mit #endpoints_compatible #region-us
|
# Indonesian RoBERTa Base
## How to Use
### As Masked Language Model
### Feature Extraction in PyTorch
|
[
"# Indonesian RoBERTa Base",
"## How to Use",
"### As Masked Language Model",
"### Feature Extraction in PyTorch"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #roberta #feature-extraction #roberta-base-indonesia #id #dataset-wikipedia #license-mit #endpoints_compatible #region-us \n",
"# Indonesian RoBERTa Base",
"## How to Use",
"### As Masked Language Model",
"### Feature Extraction in PyTorch"
] |
feature-extraction
|
transformers
|
# Indonesian tiny-RoBERTa
## How to Use
### As Masked Language Model
```python
from transformers import pipeline
pretrained_name = "akahana/tiny-roberta-indonesia"
fill_mask = pipeline(
"fill-mask",
model=pretrained_name,
tokenizer=pretrained_name
)
fill_mask("ikiryo adalah <mask> hantu dalam mitologi jepang.")
```
### Feature Extraction in PyTorch
```python
from transformers import RobertaModel, RobertaTokenizerFast
pretrained_name = "akahana/tiny-roberta-indonesia"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)
prompt = "ikiryo adalah <mask> hantu dalam mitologi jepang."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```
|
{"language": "id", "license": "mit", "tags": ["tiny-roberta-indonesia"], "datasets": ["wikipedia"], "widget": [{"text": "ikiryo adalah <mask> hantu dalam mitologi jepang."}]}
|
akahana/tiny-roberta-indonesia
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"roberta",
"feature-extraction",
"tiny-roberta-indonesia",
"id",
"dataset:wikipedia",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #tf #safetensors #roberta #feature-extraction #tiny-roberta-indonesia #id #dataset-wikipedia #license-mit #endpoints_compatible #region-us
|
# Indonesian tiny-RoBERTa
## How to Use
### As Masked Language Model
### Feature Extraction in PyTorch
|
[
"# Indonesian tiny-RoBERTa",
"## How to Use",
"### As Masked Language Model",
"### Feature Extraction in PyTorch"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #roberta #feature-extraction #tiny-roberta-indonesia #id #dataset-wikipedia #license-mit #endpoints_compatible #region-us \n",
"# Indonesian tiny-RoBERTa",
"## How to Use",
"### As Masked Language Model",
"### Feature Extraction in PyTorch"
] |
image-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-cats-vs-dogs
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cats_vs_dogs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0369
- Accuracy: 0.9883
## how to use
```python
from transformers import ViTFeatureExtractor, ViTModel
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTModel.from_pretrained('akahana/vit-base-cats-vs-dogs')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
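The snippet above only returns hidden states. To get an actual prediction from this checkpoint, a minimal sketch using the standard `ViTForImageClassification` head should work; treat the `id2label` lookup as an assumption, since the label names depend on how the checkpoint's config was saved.
```python
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Same preprocessing as above; the classification head comes from the fine-tuned checkpoint
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTForImageClassification.from_pretrained('akahana/vit-base-cats-vs-dogs')

inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
predicted_id = logits.argmax(-1).item()
print(predicted_id, model.config.id2label[predicted_id])
```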
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0949 | 1.0 | 2488 | 0.0369 | 0.9883 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "datasets": ["cats_vs_dogs"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "vit-base-cats-vs-dogs", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "cats_vs_dogs", "type": "cats_vs_dogs", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9883257403189066, "name": "Accuracy"}]}]}]}
|
akahana/vit-base-cats-vs-dogs
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:cats_vs_dogs",
"base_model:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-cats_vs_dogs #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
vit-base-cats-vs-dogs
=====================
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the cats\_vs\_dogs dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0369
* Accuracy: 0.9883
how to use
----------
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 1337
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1.0
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #vit #image-classification #generated_from_trainer #dataset-cats_vs_dogs #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 1337\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab-final
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7539
- Wer: 0.6135
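## How to use
The card does not include inference code; a minimal sketch with the `automatic-speech-recognition` pipeline is given below. The audio file path is a placeholder, and the input is assumed to be 16 kHz mono speech.
```python
from transformers import pipeline

# Load the fine-tuned checkpoint into the ASR pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final",
)

# Transcribe a local audio file (placeholder path)
result = asr("tamil_sample.wav")
print(result["text"])
```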
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
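For reference, these settings map roughly onto `transformers.TrainingArguments`; the sketch below is an approximation (the `output_dir` is a placeholder, and Adam with the listed betas/epsilon is the Trainer default).
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-tamil-colab-final",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```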
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.1466 | 1.0 | 118 | 4.3444 | 1.0 |
| 3.4188 | 2.0 | 236 | 3.2496 | 1.0 |
| 2.8617 | 3.0 | 354 | 1.6165 | 1.0003 |
| 0.958 | 4.0 | 472 | 0.7984 | 0.8720 |
| 0.5929 | 5.0 | 590 | 0.6733 | 0.7831 |
| 0.4628 | 6.0 | 708 | 0.6536 | 0.7621 |
| 0.3834 | 7.0 | 826 | 0.6037 | 0.7155 |
| 0.3242 | 8.0 | 944 | 0.6376 | 0.7184 |
| 0.2736 | 9.0 | 1062 | 0.6214 | 0.7070 |
| 0.2433 | 10.0 | 1180 | 0.6158 | 0.6944 |
| 0.2217 | 11.0 | 1298 | 0.6548 | 0.6830 |
| 0.1992 | 12.0 | 1416 | 0.6331 | 0.6775 |
| 0.1804 | 13.0 | 1534 | 0.6644 | 0.6874 |
| 0.1639 | 14.0 | 1652 | 0.6629 | 0.6649 |
| 0.143 | 15.0 | 1770 | 0.6927 | 0.6836 |
| 0.1394 | 16.0 | 1888 | 0.6933 | 0.6888 |
| 0.1296 | 17.0 | 2006 | 0.7039 | 0.6860 |
| 0.1212 | 18.0 | 2124 | 0.7042 | 0.6628 |
| 0.1121 | 19.0 | 2242 | 0.7132 | 0.6475 |
| 0.1069 | 20.0 | 2360 | 0.7423 | 0.6438 |
| 0.1063 | 21.0 | 2478 | 0.7171 | 0.6484 |
| 0.1025 | 22.0 | 2596 | 0.7396 | 0.6451 |
| 0.0946 | 23.0 | 2714 | 0.7400 | 0.6432 |
| 0.0902 | 24.0 | 2832 | 0.7385 | 0.6286 |
| 0.0828 | 25.0 | 2950 | 0.7368 | 0.6286 |
| 0.079 | 26.0 | 3068 | 0.7471 | 0.6306 |
| 0.0747 | 27.0 | 3186 | 0.7524 | 0.6201 |
| 0.0661 | 28.0 | 3304 | 0.7576 | 0.6201 |
| 0.0659 | 29.0 | 3422 | 0.7579 | 0.6130 |
| 0.0661 | 30.0 | 3540 | 0.7539 | 0.6135 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-tamil-colab-final", "results": []}]}
|
akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab-final
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-tamil-colab-final
===========================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7539
* Wer: 0.6135
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tamil-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8072
- Wer: 0.6531
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.0967 | 1.0 | 118 | 4.6437 | 1.0 |
| 3.4973 | 2.0 | 236 | 3.2588 | 1.0 |
| 3.1305 | 3.0 | 354 | 2.6566 | 1.0 |
| 1.2931 | 4.0 | 472 | 0.9156 | 0.9944 |
| 0.6851 | 5.0 | 590 | 0.7474 | 0.8598 |
| 0.525 | 6.0 | 708 | 0.6649 | 0.7995 |
| 0.4325 | 7.0 | 826 | 0.6740 | 0.7752 |
| 0.3766 | 8.0 | 944 | 0.6220 | 0.7628 |
| 0.3256 | 9.0 | 1062 | 0.6316 | 0.7322 |
| 0.2802 | 10.0 | 1180 | 0.6442 | 0.7305 |
| 0.2575 | 11.0 | 1298 | 0.6885 | 0.7280 |
| 0.2248 | 12.0 | 1416 | 0.6702 | 0.7197 |
| 0.2089 | 13.0 | 1534 | 0.6781 | 0.7173 |
| 0.1893 | 14.0 | 1652 | 0.6981 | 0.7049 |
| 0.1652 | 15.0 | 1770 | 0.7154 | 0.7436 |
| 0.1643 | 16.0 | 1888 | 0.6798 | 0.7023 |
| 0.1472 | 17.0 | 2006 | 0.7381 | 0.6947 |
| 0.1372 | 18.0 | 2124 | 0.7240 | 0.7065 |
| 0.1318 | 19.0 | 2242 | 0.7305 | 0.6714 |
| 0.1211 | 20.0 | 2360 | 0.7288 | 0.6597 |
| 0.1178 | 21.0 | 2478 | 0.7417 | 0.6699 |
| 0.1118 | 22.0 | 2596 | 0.7476 | 0.6753 |
| 0.1016 | 23.0 | 2714 | 0.7973 | 0.6647 |
| 0.0998 | 24.0 | 2832 | 0.8027 | 0.6633 |
| 0.0917 | 25.0 | 2950 | 0.8045 | 0.6680 |
| 0.0907 | 26.0 | 3068 | 0.7884 | 0.6565 |
| 0.0835 | 27.0 | 3186 | 0.8009 | 0.6622 |
| 0.0749 | 28.0 | 3304 | 0.8123 | 0.6536 |
| 0.0755 | 29.0 | 3422 | 0.8006 | 0.6555 |
| 0.074 | 30.0 | 3540 | 0.8072 | 0.6531 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-tamil-colab", "results": []}]}
|
akashsivanandan/wav2vec2-large-xls-r-300m-tamil-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-tamil-colab
=====================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8072
* Wer: 0.6531
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Kaiser DialoGPT Model
|
{"tags": ["conversational"]}
|
akaushik1/DialoGPT-small-kaiser
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Kaiser DialoGPT Model
|
[
"# Kaiser DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Kaiser DialoGPT Model"
] |
token-classification
|
transformers
|
# Hungarian Named Entity Recognition (NER) Model
This model is the fine-tuned model of "SZTAKI-HLT/hubert-base-cc"
using the famous WikiANN dataset presented
in the "Cross-lingual Name Tagging and Linking for 282 Languages" [paper](https://aclanthology.org/P17-1178.pdf).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "SZTAKI-HLT/hubert-base-cc"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9774538310923768
* f1: 0.9462099085573904
* precision: 0.9425718667406271
* recall: 0.9498761426661113
|
{"language": "hu", "widget": [{"text": "Karik\u00f3 Katalin megkapja Szeged d\u00edszpolg\u00e1rs\u00e1g\u00e1t."}]}
|
akdeniz27/bert-base-hungarian-cased-ner
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"hu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hu"
] |
TAGS
#transformers #pytorch #safetensors #bert #token-classification #hu #autotrain_compatible #endpoints_compatible #region-us
|
# Hungarian Named Entity Recognition (NER) Model
This model is the fine-tuned model of "SZTAKI-HLT/hubert-base-cc"
using the famous WikiANN dataset presented
in the "Cross-lingual Name Tagging and Linking for 282 Languages" paper.
# Fine-tuning parameters:
# How to use:
Pls refer "URL for entity grouping with aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9774538310923768
* f1: 0.9462099085573904
* precision: 0.9425718667406271
* recall: 0.9498761426661113
|
[
"# Hungarian Named Entity Recognition (NER) Model\nThis model is the fine-tuned model of \"SZTAKI-HLT/hubert-base-cc\" \nusing the famous WikiANN dataset presented\nin the \"Cross-lingual Name Tagging and Linking for 282 Languages\" paper.",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9774538310923768\n* f1: 0.9462099085573904\n* precision: 0.9425718667406271\n* recall: 0.9498761426661113"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #hu #autotrain_compatible #endpoints_compatible #region-us \n",
"# Hungarian Named Entity Recognition (NER) Model\nThis model is the fine-tuned model of \"SZTAKI-HLT/hubert-base-cc\" \nusing the famous WikiANN dataset presented\nin the \"Cross-lingual Name Tagging and Linking for 282 Languages\" paper.",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9774538310923768\n* f1: 0.9462099085573904\n* precision: 0.9425718667406271\n* recall: 0.9498761426661113"
] |
token-classification
|
transformers
|
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned model of "dbmdz/bert-base-turkish-cased"
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-turkish-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("your text here")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9933935699477056
* f1: 0.9592969472710453
* precision: 0.9543530277931161
* recall: 0.9642923563325274
Evaluation results with the test sets proposed in ["Küçük, D., Küçük, D., Arıcı, N. 2016. Türkçe Varlık İsmi Tanıma için bir Veri Kümesi ("A Named Entity Recognition Dataset for Turkish"). IEEE Sinyal İşleme, İletişim ve Uygulamaları Kurultayı. Zonguldak, Türkiye."](https://ieeexplore.ieee.org/document/7495744) paper.
| Test Set | Acc. | Prec. | Rec. | F1-Score |
|:--------:|:----:|:-----:|:----:|:--------:|
| 20010000 | 0.9946 | 0.9871 | 0.9463 | 0.9662 |
| 20020000 | 0.9928 | 0.9134 | 0.9206 | 0.9170 |
| 20030000 | 0.9942 | 0.9814 | 0.9186 | 0.9489 |
| 20040000 | 0.9943 | 0.9660 | 0.9522 | 0.9590 |
| 20050000 | 0.9971 | 0.9539 | 0.9932 | 0.9732 |
| 20060000 | 0.9993 | 0.9942 | 0.9942 | 0.9942 |
| 20070000 | 0.9970 | 0.9806 | 0.9439 | 0.9619 |
| 20080000 | 0.9988 | 0.9821 | 0.9649 | 0.9735 |
| 20090000 | 0.9977 | 0.9891 | 0.9479 | 0.9681 |
| 20100000 | 0.9961 | 0.9684 | 0.9293 | 0.9485 |
| Overall  | 0.9961 | 0.9720 | 0.9516 | 0.9617 |
|
{"language": "tr", "widget": [{"text": "Mustafa Kemal Atat\u00fcrk 19 May\u0131s 1919'da Samsun'a \u00e7\u0131kt\u0131."}]}
|
akdeniz27/bert-base-turkish-cased-ner
| null |
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"bert",
"token-classification",
"tr",
"doi:10.57967/hf/0949",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #onnx #safetensors #bert #token-classification #tr #doi-10.57967/hf/0949 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned model of "dbmdz/bert-base-turkish-cased"
using a reviewed version of well known Turkish NER dataset
(URL
# Fine-tuning parameters:
# How to use:
Pls refer "URL for entity grouping with aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9933935699477056
* f1: 0.9592969472710453
* precision: 0.9543530277931161
* recall: 0.9642923563325274
Evaluation results with the test sets proposed in "Küçük, D., Küçük, D., Arıcı, N. 2016. Türkçe Varlık İsmi Tanıma için bir Veri Kümesi ("A Named Entity Recognition Dataset for Turkish"). IEEE Sinyal İşleme, İletişim ve Uygulamaları Kurultayı. Zonguldak, Türkiye." paper.
* Test Set Acc. Prec. Rec. F1-Score
* 20010000 0.9946 0.9871 0.9463 0.9662
* 20020000 0.9928 0.9134 0.9206 0.9170
* 20030000 0.9942 0.9814 0.9186 0.9489
* 20040000 0.9943 0.9660 0.9522 0.9590
* 20050000 0.9971 0.9539 0.9932 0.9732
* 20060000 0.9993 0.9942 0.9942 0.9942
* 20070000 0.9970 0.9806 0.9439 0.9619
* 20080000 0.9988 0.9821 0.9649 0.9735
* 20090000 0.9977 0.9891 0.9479 0.9681
* 20100000 0.9961 0.9684 0.9293 0.9485
* Overall 0.9961 0.9720 0.9516 0.9617
|
[
"# Turkish Named Entity Recognition (NER) Model\n\nThis model is the fine-tuned model of \"dbmdz/bert-base-turkish-cased\" \nusing a reviewed version of well known Turkish NER dataset \n(URL",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9933935699477056\n* f1: 0.9592969472710453\n* precision: 0.9543530277931161\n* recall: 0.9642923563325274\n\nEvaluation results with the test sets proposed in \"Küçük, D., Küçük, D., Arıcı, N. 2016. Türkçe Varlık İsmi Tanıma için bir Veri Kümesi (\"A Named Entity Recognition Dataset for Turkish\"). IEEE Sinyal İşleme, İletişim ve Uygulamaları Kurultayı. Zonguldak, Türkiye.\" paper.\n\n* Test Set\tAcc.\tPrec.\tRec.\tF1-Score\n* 20010000\t0.9946 0.9871 0.9463\t0.9662\n* 20020000\t0.9928\t0.9134\t0.9206\t0.9170\n* 20030000\t0.9942\t0.9814\t0.9186\t0.9489\n* 20040000\t0.9943\t0.9660\t0.9522\t0.9590\n* 20050000\t0.9971\t0.9539\t0.9932\t0.9732\n* 20060000\t0.9993\t0.9942\t0.9942\t0.9942\n* 20070000\t0.9970\t0.9806\t0.9439\t0.9619\n* 20080000\t0.9988\t0.9821\t0.9649\t0.9735\n* 20090000\t0.9977\t0.9891\t0.9479\t0.9681\n* 20100000\t0.9961\t0.9684\t0.9293\t0.9485\n* Overall \t0.9961\t0.9720\t0.9516\t0.9617"
] |
[
"TAGS\n#transformers #pytorch #onnx #safetensors #bert #token-classification #tr #doi-10.57967/hf/0949 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Turkish Named Entity Recognition (NER) Model\n\nThis model is the fine-tuned model of \"dbmdz/bert-base-turkish-cased\" \nusing a reviewed version of well known Turkish NER dataset \n(URL",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9933935699477056\n* f1: 0.9592969472710453\n* precision: 0.9543530277931161\n* recall: 0.9642923563325274\n\nEvaluation results with the test sets proposed in \"Küçük, D., Küçük, D., Arıcı, N. 2016. Türkçe Varlık İsmi Tanıma için bir Veri Kümesi (\"A Named Entity Recognition Dataset for Turkish\"). IEEE Sinyal İşleme, İletişim ve Uygulamaları Kurultayı. Zonguldak, Türkiye.\" paper.\n\n* Test Set\tAcc.\tPrec.\tRec.\tF1-Score\n* 20010000\t0.9946 0.9871 0.9463\t0.9662\n* 20020000\t0.9928\t0.9134\t0.9206\t0.9170\n* 20030000\t0.9942\t0.9814\t0.9186\t0.9489\n* 20040000\t0.9943\t0.9660\t0.9522\t0.9590\n* 20050000\t0.9971\t0.9539\t0.9932\t0.9732\n* 20060000\t0.9993\t0.9942\t0.9942\t0.9942\n* 20070000\t0.9970\t0.9806\t0.9439\t0.9619\n* 20080000\t0.9988\t0.9821\t0.9649\t0.9735\n* 20090000\t0.9977\t0.9891\t0.9479\t0.9681\n* 20100000\t0.9961\t0.9684\t0.9293\t0.9485\n* Overall \t0.9961\t0.9720\t0.9516\t0.9617"
] |
text-classification
|
transformers
|
# Turkish Text Classification for Complaints Data Set
This model is a fine-tuned version of the Turkish BERT model from https://github.com/stefan-it/turkish-bert, trained on text classification data with the following 9 categories:
id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}
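A minimal usage sketch with the standard `text-classification` pipeline is shown below (the example sentence is only illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="akdeniz27/bert-turkish-text-classification",
)

# Returns one of the nine categories listed above
print(classifier("Otobüs durakta durmadı ve yolcuları almadı."))
```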
|
{"language": "tr"}
|
akdeniz27/bert-turkish-text-classification
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #safetensors #bert #text-classification #tr #autotrain_compatible #endpoints_compatible #region-us
|
# Turkish Text Classification for Complaints Data Set
This model is a fine-tune model of URL by using text classification data with 9 categories as follows:
id_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ',
4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}
|
[
"# Turkish Text Classification for Complaints Data Set\n\nThis model is a fine-tune model of URL by using text classification data with 9 categories as follows:\n\nid_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ', \n 4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #text-classification #tr #autotrain_compatible #endpoints_compatible #region-us \n",
"# Turkish Text Classification for Complaints Data Set\n\nThis model is a fine-tune model of URL by using text classification data with 9 categories as follows:\n\nid_to_category = {0: 'KONFORSUZLUK', 1: 'TARİFE İHLALİ', 2: 'DURAKTA DURMAMA', 3: 'ŞOFÖR-PERSONEL ŞİKAYETİ', \n 4: 'YENİ GÜZERGAH/HAT/DURAK İSTEĞİ', 5: 'TRAFİK GÜVENLİĞİ', 6: 'DİĞER ŞİKAYETLER', 7: 'TEŞEKKÜR', 8: 'DİĞER TALEPLER'}"
] |
token-classification
|
transformers
|
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned model of dbmdz/convbert-base-turkish-cased (ConvBERTurk)
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/convbert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/convbert-base-turkish-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/convbert-base-turkish-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
# Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html
# for details on entity grouping with the aggregation_strategy parameter.
```
# Reference test results:
* accuracy: 0.9937648915431506
* f1: 0.9610945644080416
* precision: 0.9619899385131359
* recall: 0.9602008554956295
|
{"language": "tr", "widget": [{"text": "Almanya, koronavir\u00fcs a\u015f\u0131s\u0131n\u0131 geli\u015ftiren Dr. \u00d6zlem T\u00fcreci ve e\u015fi Prof. Dr. U\u011fur \u015eahin'e liyakat ni\u015fan\u0131 verdi"}]}
|
akdeniz27/convbert-base-turkish-cased-ner
| null |
[
"transformers",
"pytorch",
"onnx",
"safetensors",
"convbert",
"token-classification",
"tr",
"arxiv:2008.02496",
"doi:10.57967/hf/0015",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.02496"
] |
[
"tr"
] |
TAGS
#transformers #pytorch #onnx #safetensors #convbert #token-classification #tr #arxiv-2008.02496 #doi-10.57967/hf/0015 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned model of dbmdz/convbert-base-turkish-cased (ConvBERTurk)
using a reviewed version of well known Turkish NER dataset
(URL
The ConvBERT architecture is presented in the "ConvBERT: Improving BERT with Span-based Dynamic Convolution" paper.
# Fine-tuning parameters:
# How to use:
# Reference test results:
* accuracy: 0.9937648915431506
* f1: 0.9610945644080416
* precision: 0.9619899385131359
* recall: 0.9602008554956295
|
[
"# Turkish Named Entity Recognition (NER) Model\nThis model is the fine-tuned model of dbmdz/convbert-base-turkish-cased (ConvBERTurk)\nusing a reviewed version of well known Turkish NER dataset\n \n(URL\n\nThe ConvBERT architecture is presented in the \"ConvBERT: Improving BERT with Span-based Dynamic Convolution\" paper.",
"# Fine-tuning parameters:",
"# How to use:",
"# Reference test results:\n* accuracy: 0.9937648915431506\n* f1: 0.9610945644080416\n* precision: 0.9619899385131359\n* recall: 0.9602008554956295"
] |
[
"TAGS\n#transformers #pytorch #onnx #safetensors #convbert #token-classification #tr #arxiv-2008.02496 #doi-10.57967/hf/0015 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Turkish Named Entity Recognition (NER) Model\nThis model is the fine-tuned model of dbmdz/convbert-base-turkish-cased (ConvBERTurk)\nusing a reviewed version of well known Turkish NER dataset\n \n(URL\n\nThe ConvBERT architecture is presented in the \"ConvBERT: Improving BERT with Span-based Dynamic Convolution\" paper.",
"# Fine-tuning parameters:",
"# How to use:",
"# Reference test results:\n* accuracy: 0.9937648915431506\n* f1: 0.9610945644080416\n* precision: 0.9619899385131359\n* recall: 0.9602008554956295"
] |
question-answering
|
transformers
|
# DeBERTa v2 XLarge Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "DeBERTa v2 XLarge"
using CUAD dataset https://huggingface.co/datasets/cuad
Link for model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo
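A minimal extractive question-answering sketch for this checkpoint is shown below; the contract text and question are placeholders:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="akdeniz27/deberta-v2-xlarge-cuad",
)

contract_text = "..."  # placeholder: full contract text goes here
question = "Highlight the parts (if any) of this contract related to the governing law."

# Returns the most likely answer span with its score and character offsets
print(qa(question=question, context=contract_text))
```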
|
{"language": "en", "datasets": ["cuad"]}
|
akdeniz27/deberta-v2-xlarge-cuad
| null |
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"question-answering",
"en",
"dataset:cuad",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #deberta-v2 #question-answering #en #dataset-cuad #endpoints_compatible #has_space #region-us
|
# DeBERTa v2 XLarge Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "DeBERTa v2 XLarge"
using CUAD dataset URL
Link for model checkpoint: URL
For the use of the model with CUAD: URL
and URL
|
[
"# DeBERTa v2 XLarge Model fine-tuned with CUAD dataset\nThis model is the fine-tuned version of \"DeBERTa v2 XLarge\" \nusing CUAD dataset URL\n\nLink for model checkpoint: URL\n\nFor the use of the model with CUAD: URL\nand URL"
] |
[
"TAGS\n#transformers #pytorch #safetensors #deberta-v2 #question-answering #en #dataset-cuad #endpoints_compatible #has_space #region-us \n",
"# DeBERTa v2 XLarge Model fine-tuned with CUAD dataset\nThis model is the fine-tuned version of \"DeBERTa v2 XLarge\" \nusing CUAD dataset URL\n\nLink for model checkpoint: URL\n\nFor the use of the model with CUAD: URL\nand URL"
] |
token-classification
|
transformers
|
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of "microsoft/mDeBERTa-v3-base"
(a multilingual version of DeBERTa V3)
using a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "microsoft/mdeberta-v3-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
# How to use:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mDeBERTa-v3-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mDeBERTa-v3-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the aggregation_strategy parameter.
# Reference test results:
* f1: 0.95
* precision: 0.94
* recall: 0.96
|
{"language": "tr", "widget": [{"text": "Mustafa Kemal Atat\u00fcrk 19 May\u0131s 1919'da Samsun'a \u00e7\u0131kt\u0131."}]}
|
akdeniz27/mDeBERTa-v3-base-turkish-ner
| null |
[
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"token-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #safetensors #deberta-v2 #token-classification #tr #autotrain_compatible #endpoints_compatible #region-us
|
# Turkish Named Entity Recognition (NER) Model
This model is the fine-tuned version of "microsoft/mDeBERTa-v3-base"
(a multilingual version of DeBERTa V3)
using a reviewed version of well known Turkish NER dataset
(URL
# Fine-tuning parameters:
# How to use:
Pls refer "URL for entity grouping with aggregation_strategy parameter.
# Reference test results:
* f1: 0.95
* precision: 0.94
* recall: 0.96
|
[
"# Turkish Named Entity Recognition (NER) Model\nThis model is the fine-tuned version of \"microsoft/mDeBERTa-v3-base\"\n(a multilingual version of DeBERTa V3) \nusing a reviewed version of well known Turkish NER dataset \n(URL",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* f1: 0.95\n* precision: 0.94\n* recall: 0.96"
] |
[
"TAGS\n#transformers #pytorch #safetensors #deberta-v2 #token-classification #tr #autotrain_compatible #endpoints_compatible #region-us \n",
"# Turkish Named Entity Recognition (NER) Model\nThis model is the fine-tuned version of \"microsoft/mDeBERTa-v3-base\"\n(a multilingual version of DeBERTa V3) \nusing a reviewed version of well known Turkish NER dataset \n(URL",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* f1: 0.95\n* precision: 0.94\n* recall: 0.96"
] |
token-classification
|
transformers
|
# Albanian Named Entity Recognition (NER) Model
This model is the fine-tuned model of "bert-base-multilingual-cased"
using the famous WikiANN dataset presented
in the "Cross-lingual Name Tagging and Linking for 282 Languages" [paper](https://aclanthology.org/P17-1178.pdf).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "bert-base-multilingual-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/mbert-base-albanian-cased-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/mbert-base-albanian-cased-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for details on entity grouping with the aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9719268816143276
* f1: 0.9192366826444787
* precision: 0.9171629669734704
* recall: 0.9213197969543148
|
{"language": "sq", "widget": [{"text": "Varianti AY.4.2 \u00ebsht\u00eb m\u00eb i leht\u00eb p\u00ebr t'u transmetuar, thot\u00eb Francois Balu, drejtor i Institutit t\u00eb Gjenetik\u00ebs n\u00eb Lond\u00ebr."}]}
|
akdeniz27/mbert-base-albanian-cased-ner
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"sq",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sq"
] |
TAGS
#transformers #pytorch #safetensors #bert #token-classification #sq #autotrain_compatible #endpoints_compatible #region-us
|
# Albanian Named Entity Recognition (NER) Model
This model is the fine-tuned model of "bert-base-multilingual-cased"
using the famous WikiANN dataset presented
in the "Cross-lingual Name Tagging and Linking for 282 Languages" paper.
# Fine-tuning parameters:
# How to use:
Pls refer "URL for entity grouping with aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9719268816143276
* f1: 0.9192366826444787
* precision: 0.9171629669734704
* recall: 0.9213197969543148
|
[
"# Albanian Named Entity Recognition (NER) Model\nThis model is the fine-tuned model of \"bert-base-multilingual-cased\" \nusing the famous WikiANN dataset presented\nin the \"Cross-lingual Name Tagging and Linking for 282 Languages\" paper.",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9719268816143276\n* f1: 0.9192366826444787\n* precision: 0.9171629669734704\n* recall: 0.9213197969543148"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #token-classification #sq #autotrain_compatible #endpoints_compatible #region-us \n",
"# Albanian Named Entity Recognition (NER) Model\nThis model is the fine-tuned model of \"bert-base-multilingual-cased\" \nusing the famous WikiANN dataset presented\nin the \"Cross-lingual Name Tagging and Linking for 282 Languages\" paper.",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9719268816143276\n* f1: 0.9192366826444787\n* precision: 0.9171629669734704\n* recall: 0.9213197969543148"
] |
question-answering
|
transformers
|
# RoBERTa Base Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Base"
using CUAD dataset https://huggingface.co/datasets/cuad
Link for model checkpoint: https://github.com/TheAtticusProject/cuad
For the use of the model with CUAD: https://github.com/marshmellow77/cuad-demo
and https://huggingface.co/spaces/akdeniz27/contract-understanding-atticus-dataset-demo
|
{"language": "en", "datasets": ["cuad"]}
|
akdeniz27/roberta-base-cuad
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-cuad #endpoints_compatible #has_space #region-us
|
# RoBERTa Base Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Base"
using CUAD dataset URL
Link for model checkpoint: URL
For the use of the model with CUAD: URL
and URL
|
[
"# RoBERTa Base Model fine-tuned with CUAD dataset\nThis model is the fine-tuned version of \"RoBERTa Base\" \nusing CUAD dataset URL\n\nLink for model checkpoint: URL\n\nFor the use of the model with CUAD: URL\nand URL"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-cuad #endpoints_compatible #has_space #region-us \n",
"# RoBERTa Base Model fine-tuned with CUAD dataset\nThis model is the fine-tuned version of \"RoBERTa Base\" \nusing CUAD dataset URL\n\nLink for model checkpoint: URL\n\nFor the use of the model with CUAD: URL\nand URL"
] |
question-answering
|
transformers
|
# Model Card for RoBERTa Large Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Large" using CUAD dataset
# Model Details
## Model Description
The [Contract Understanding Atticus Dataset (CUAD)](https://www.atticusprojectai.org/cuad), pronounced "kwad", is a dataset for legal contract review curated by the Atticus Project.
Contract review is a task about "finding needles in a haystack."
We find that Transformer models have nascent performance on CUAD, but that this performance is strongly influenced by model design and training dataset size. Despite some promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Developed by:** TheAtticusProject
- **Shared by [Optional]:** HuggingFace
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** More information needed
- **Related Models:** RoBERTa
- **Parent Model:** RoBERTa Large
- **Resources for more information:**
- [GitHub Repo](https://github.com/TheAtticusProject/cuad)
- [Associated Paper](https://arxiv.org/abs/2103.06268)
# Uses
## Direct Use
Legal contract review
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
See [cuad dataset card](https://huggingface.co/datasets/cuad) for further details
## Training Procedure
More information needed
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
#### Extra Data
Researchers may be interested in several gigabytes of unlabeled contract pretraining data, which is available [here](https://drive.google.com/file/d/1of37X0hAhECQ3BN_004D8gm6V88tgZaB/view?usp=sharing).
### Factors
More information needed
### Metrics
More information needed
## Results
We [provide checkpoints](https://zenodo.org/record/4599830) for three of the best models fine-tuned on CUAD: RoBERTa-base (~100M parameters), RoBERTa-large (~300M parameters), and DeBERTa-xlarge (~900M parameters).
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
The HuggingFace [Transformers](https://huggingface.co/transformers) library. It was tested with Python 3.8, PyTorch 1.7, and Transformers 4.3/4.4.
# Citation
**BibTeX:**
```
@article{hendrycks2021cuad,
  title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
  author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
  journal={NeurIPS},
  year={2021}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
For more details about CUAD and legal contract review, see the [Atticus Project website](https://www.atticusprojectai.org/cuad).
# Model Card Authors [optional]
TheAtticusProject
# Model Card Contact
[TheAtticusProject](https://www.atticusprojectai.org/), in collaboration with Ezi Ozoani and the HuggingFace Team
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/roberta-large-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("akdeniz27/roberta-large-cuad")
```
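Once loaded, a rough inference sketch looks like the following; the question and contract text are placeholders:
```python
import torch

question = "Highlight the parts (if any) of this contract related to the agreement date."
context = "..."  # placeholder: contract text

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Decode the most likely answer span from the start/end logits
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax()) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```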
</details>
|
{"language": "en", "datasets": ["cuad"]}
|
akdeniz27/roberta-large-cuad
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"question-answering",
"en",
"dataset:cuad",
"arxiv:2103.06268",
"arxiv:1910.09700",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.06268",
"1910.09700"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #roberta #question-answering #en #dataset-cuad #arxiv-2103.06268 #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
|
# Model Card for RoBERTa Large Model fine-tuned with CUAD dataset
This model is the fine-tuned version of "RoBERTa Large" using CUAD dataset
# Model Details
## Model Description
The Contract Understanding Atticus Dataset (CUAD), pronounced "kwad", a dataset for legal contract review curated by the Atticus Project.
Contract review is a task about "finding needles in a haystack."
We find that Transformer models have nascent performance on CUAD, but that this performance is strongly influenced by model design and training dataset size. Despite some promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- Developed by: TheAtticusProject
- Shared by [Optional]: HuggingFace
- Model type: Language model
- Language(s) (NLP): en
- License: More information needed
- Related Models: RoBERTA
- Parent Model:RoBERTA Large
- Resources for more information:
- GitHub Repo
- Associated Paper
# Uses
## Direct Use
Legal contract review
## Downstream Use [Optional]
More information needed
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recomendations.
# Training Details
## Training Data
See cuad dataset card for further details
## Training Procedure
More information needed
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
#### Extra Data
Researchers may be interested in several gigabytes of unlabeled contract pretraining data, which is available here.
### Factors
More information needed
### Metrics
More information needed
## Results
We provide checkpoints for three of the best models fine-tuned on CUAD: RoBERTa-base (~100M parameters), RoBERTa-large (~300M parameters), and DeBERTa-xlarge (~900M parameters).
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: More information needed
- Hours used: More information needed
- Cloud Provider: More information needed
- Compute Region: More information needed
- Carbon Emitted: More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
The HuggingFace Transformers library. It was tested with Python 3.8, PyTorch 1.7, and Transformers 4.3/4.4.
BibTeX:
@article{hendrycks2021cuad,
title={CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review},
author={Dan Hendrycks and Collin Burns and Anya Chen and Spencer Ball},
journal={NeurIPS},
year={2021}
}
# Glossary [optional]
More information needed
# More Information [optional]
For more details about CUAD and legal contract review, see the Atticus Project website.
# Model Card Authors [optional]
TheAtticusProject
# Model Card Contact
TheAtticusProject, in collaboration with Ezi Ozoani and the HuggingFace Team
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
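
The following is a minimal usage sketch rather than an official reference implementation: `<this-model-id>` is a placeholder for the Hub repository id of this checkpoint (not stated in this card), and the snippet assumes the standard extractive question-answering head used by the CUAD baselines.

```python
# Minimal sketch, not the official reference implementation.
# Replace "<this-model-id>" with the actual Hub repository id of this checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "<this-model-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Highlight the parts (if any) of this contract related to the governing law."
context = (
    "This Agreement shall be governed by and construed in accordance with "
    "the laws of the State of New York."
)

inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Take the most likely start/end token positions and decode the answer span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```

Note that real contracts are far longer than the model's maximum sequence length, so practical use requires chunking the document with a sliding window (doc stride), as in SQuAD-style question-answering pipelines.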
</details>