pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1 to 900k) | metadata (stringlengths 2 to 438k) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25) | arxiv (listlengths 0 to 201) | languages (listlengths 0 to 1.83k) | tags_str (stringlengths 17 to 9.34k) | text_str (stringlengths 0 to 389k) | text_lists (listlengths 0 to 722) | processed_texts (listlengths 1 to 723) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Urdu
This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Wer: 0.5747
- Cer: 0.3268
## Model description
The training and validation data amount to 0.58 hours of audio. It was hard to train any model on such a small amount of data, so I decided to take the vakyansh-wav2vec2-urdu-urm-60 checkpoint and fine-tune the wav2vec2 model.
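A minimal inference sketch, assuming the standard `transformers` ASR pipeline; the repository id `kingabzpro/wav2vec2-urdu` comes from this entry, and `urdu_clip.wav` is a placeholder for a local 16 kHz recording:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the automatic-speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="kingabzpro/wav2vec2-urdu")

# Transcribe a local Urdu audio clip (placeholder path).
print(asr("urdu_clip.wav")["text"])
```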
## Training procedure
The model was fine-tuned from Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 due to the small number of training samples.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 4.3054 | 16.67 | 50 | 9.0055 | 0.8306 | 0.4869 |
| 2.0629 | 33.33 | 100 | 9.5849 | 0.6061 | 0.3414 |
| 0.8966 | 50.0 | 150 | 4.8686 | 0.6052 | 0.3426 |
| 0.4197 | 66.67 | 200 | 12.3261 | 0.5817 | 0.3370 |
| 0.294 | 83.33 | 250 | 11.9653 | 0.5712 | 0.3328 |
| 0.2329 | 100.0 | 300 | 7.6846 | 0.5747 | 0.3268 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["ur"], "license": "apache-2.0", "library_name": "transformers", "tags": ["automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "pipeline_tag": "automatic-speech-recognition", "base_model": "Harveenchadha/vakyansh-wav2vec2-urdu-urm-60", "model-index": [{"name": "wav2vec2-urdu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Urdu Speech Recognition"}, "dataset": {"name": "Common Voice (Urdu)", "type": "common_voice"}, "metrics": [{"type": "wer", "value": 57.47, "name": "WER", "config": "load_metric(\"wer\")", "verified": true}, {"type": "cer", "value": 32.68, "name": "CER", "config": "load_metric(\"cer\")", "verified": true}]}]}]}
|
kingabzpro/wav2vec2-urdu
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"robust-speech-event",
"hf-asr-leaderboard",
"ur",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:Harveenchadha/vakyansh-wav2vec2-urdu-urm-60",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ur"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_8_0 #base_model-Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-Urdu
==============================
This model is a fine-tuned version of Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Wer: 0.5747
* Cer: 0.3268
Model description
-----------------
The training and validation data amount to 0.58 hours of audio. It was hard to train any model on such a small amount of data, so I decided to take the vakyansh-wav2vec2-urdu-urm-60 checkpoint and fine-tune the wav2vec2 model.
Training procedure
------------------
The model was fine-tuned from Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 due to the small number of training samples.
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 64
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 100
* num\_epochs: 100
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #robust-speech-event #hf-asr-leaderboard #ur #dataset-mozilla-foundation/common_voice_8_0 #base_model-Harveenchadha/vakyansh-wav2vec2-urdu-urm-60 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 100\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-magazine-classifier
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8377
- Precision: 0.25
- Recall: 0.125
- Fscore: 0.1667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
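A sketch of how these values map onto `transformers.TrainingArguments` (illustrative only; the output directory and any arguments not listed above are assumptions):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; "out" is a placeholder output directory.
training_args = TrainingArguments(
    output_dir="out",
    learning_rate=5e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```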
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.1779 | 1.0 | 2 | 1.7584 | 0.2222 | 0.3333 | 0.2667 |
| 0.1635 | 2.0 | 4 | 1.7585 | 0.25 | 0.125 | 0.1667 |
| 0.1405 | 3.0 | 6 | 1.8377 | 0.25 | 0.125 | 0.1667 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall"], "model-index": [{"name": "distilbert-magazine-classifier", "results": []}]}
|
kingla6/distilbert-magazine-classifier
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
distilbert-magazine-classifier
==============================
This model is a fine-tuned version of distilbert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 1.8377
* Precision: 0.25
* Recall: 0.125
* Fscore: 0.1667
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
# POS tagger based on SlovakBERT
This is a POS tagger based on [SlovakBERT](https://huggingface.co/gerulata/slovakbert). The model uses the [Universal POS tagset (UPOS)](https://universaldependencies.org/u/pos/). The model was fine-tuned using the Slovak part of the [Universal Dependencies dataset](https://universaldependencies.org/) [Zeman 2017], containing 10k manually annotated Slovak sentences.
## Results
The model was evaluated in [our paper](https://arxiv.org/abs/2109.15254) [Pikuliak et al 2021, Section 4.2]. It achieves \\(97.84\%\\) accuracy.
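A minimal usage sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is taken from this card's widget):
```python
from transformers import pipeline

# UPOS tagging with the fine-tuned SlovakBERT checkpoint.
pos_tagger = pipeline("token-classification", model="kinit/slovakbert-pos")

for token in pos_tagger("Kde tá ľudská duša drieme?"):
    print(token["word"], token["entity"])
```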
## Cite
```
@inproceedings{pikuliak-etal-2022-slovakbert,
title = "{S}lovak{BERT}: {S}lovak Masked Language Model",
author = "Pikuliak, Mat{\'u}{\v{s}} and
Grivalsk{\'y}, {\v{S}}tefan and
Kon{\^o}pka, Martin and
Bl{\v{s}}t{\'a}k, Miroslav and
Tamajka, Martin and
Bachrat{\'y}, Viktor and
Simko, Marian and
Bal{\'a}{\v{z}}ik, Pavol and
Trnka, Michal and
Uhl{\'a}rik, Filip",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.530",
pages = "7156--7168",
abstract = "We introduce a new Slovak masked language model called \textit{SlovakBERT}. This is to our best knowledge the first paper discussing Slovak transformers-based language models. We evaluate our model on several NLP tasks and achieve state-of-the-art results. This evaluation is likewise the first attempt to establish a benchmark for Slovak language models. We publish the masked language model, as well as the fine-tuned models for part-of-speech tagging, sentiment analysis and semantic textual similarity.",
}
```
|
{"language": ["sk"], "license": "cc", "tags": ["pos"], "datasets": ["universal_dependencies"], "metrics": ["accuracy"], "widget": [{"text": "Kde t\u00e1 \u013eudsk\u00e1 du\u0161a drieme?"}]}
|
kinit/slovakbert-pos
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"pos",
"sk",
"dataset:universal_dependencies",
"arxiv:2109.15254",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.15254"
] |
[
"sk"
] |
TAGS
#transformers #pytorch #roberta #token-classification #pos #sk #dataset-universal_dependencies #arxiv-2109.15254 #license-cc #autotrain_compatible #endpoints_compatible #region-us
|
# POS tagger based on SlovakBERT
This is a POS tagger based on SlovakBERT. The model uses the Universal POS tagset (UPOS). The model was fine-tuned using the Slovak part of the Universal Dependencies dataset [Zeman 2017], containing 10k manually annotated Slovak sentences.
## Results
The model was evaluated in our paper [Pikuliak et al 2021, Section 4.2]. It achieves \\(97.84\%\\) accuracy.
## Cite
|
[
"# POS tagger based on SlovakBERT\n\nThis is a POS tagger based on SlovakBERT. The model uses Universal POS tagset (UPOS). The model was fine-tuned using Slovak part of Universal Dependencies dataset [Zeman 2017] containing 10k manually annotated Slovak sentences.",
"## Results\n\nThe model was evaluated in our paper [Pikuliak et al 2021, Section 4.2]. It achieves \\\\(97.84\\%\\\\) accuracy.",
"## Cite"
] |
[
"TAGS\n#transformers #pytorch #roberta #token-classification #pos #sk #dataset-universal_dependencies #arxiv-2109.15254 #license-cc #autotrain_compatible #endpoints_compatible #region-us \n",
"# POS tagger based on SlovakBERT\n\nThis is a POS tagger based on SlovakBERT. The model uses Universal POS tagset (UPOS). The model was fine-tuned using Slovak part of Universal Dependencies dataset [Zeman 2017] containing 10k manually annotated Slovak sentences.",
"## Results\n\nThe model was evaluated in our paper [Pikuliak et al 2021, Section 4.2]. It achieves \\\\(97.84\\%\\\\) accuracy.",
"## Cite"
] |
text-classification
|
transformers
|
# Sentiment Analysis model based on SlovakBERT
This is a sentiment analysis classifier based on [SlovakBERT](https://huggingface.co/gerulata/slovakbert). The model can distinguish three levels of sentiment:
- `-1` - Negative sentiment
- `0` - Neutral sentiment
- `1` - Positive sentiment
The model was fine-tuned using the Slovak part of the [Multilingual Twitter Sentiment Analysis Dataset](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0155036) [Mozetič et al 2016], containing 50k manually annotated Slovak tweets. As such, it is fine-tuned for tweets, and it is not advised to use the model for general-purpose sentiment analysis.
## Results
The model was evaluated in [our paper](https://arxiv.org/abs/2109.15254) [Pikuliak et al 2021, Section 4.4]. It achieves \\(0.67\\) F1-score on the original dataset and \\(0.58\\) F1-score on general reviews dataset.
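A minimal usage sketch, assuming the standard `transformers` text-classification pipeline (the example tweet is taken from this card's widget):
```python
from transformers import pipeline

# Three-way sentiment classification (-1 / 0 / 1) with the fine-tuned checkpoint.
classifier = pipeline("text-classification", model="kinit/slovakbert-sentiment-twitter")

print(classifier("A opäť sa objavili nebezpečné výrobky. Pozrite sa, či ich nemáte doma"))
```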
## Cite
```
@inproceedings{pikuliak-etal-2022-slovakbert,
title = "{S}lovak{BERT}: {S}lovak Masked Language Model",
author = "Pikuliak, Mat{\'u}{\v{s}} and
Grivalsk{\'y}, {\v{S}}tefan and
Kon{\^o}pka, Martin and
Bl{\v{s}}t{\'a}k, Miroslav and
Tamajka, Martin and
Bachrat{\'y}, Viktor and
Simko, Marian and
Bal{\'a}{\v{z}}ik, Pavol and
Trnka, Michal and
Uhl{\'a}rik, Filip",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.530",
pages = "7156--7168",
abstract = "We introduce a new Slovak masked language model called \textit{SlovakBERT}. This is to our best knowledge the first paper discussing Slovak transformers-based language models. We evaluate our model on several NLP tasks and achieve state-of-the-art results. This evaluation is likewise the first attempt to establish a benchmark for Slovak language models. We publish the masked language model, as well as the fine-tuned models for part-of-speech tagging, sentiment analysis and semantic textual similarity.",
}
```
|
{"language": ["sk"], "license": "cc", "tags": ["twitter", "sentiment-analysis"], "metrics": ["f1"], "widget": [{"text": "Najkraj\u0161ia viano\u010dn\u00e1 reklama: Toto mil\u00e9 video v\u00e1m vyk\u00fazli \u010darovn\u00fa atmosf\u00e9ru: Vianoce sa nezadr\u017eate\u013ene bl\u00ed\u017eia."}, {"text": "A op\u00e4\u0165 sa objavili nebezpe\u010dn\u00e9 v\u00fdrobky. Pozrite sa, \u010di ich nem\u00e1te doma"}]}
|
kinit/slovakbert-sentiment-twitter
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"twitter",
"sentiment-analysis",
"sk",
"arxiv:2109.15254",
"license:cc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.15254"
] |
[
"sk"
] |
TAGS
#transformers #pytorch #roberta #text-classification #twitter #sentiment-analysis #sk #arxiv-2109.15254 #license-cc #autotrain_compatible #endpoints_compatible #region-us
|
# Sentiment Analysis model based on SlovakBERT
This is a sentiment analysis classifier based on SlovakBERT. The model can distinguish three levels of sentiment:
- '-1' - Negative sentiment
- '0' - Neutral sentiment
- '1' - Positive sentiment
The model was fine-tuned using the Slovak part of the Multilingual Twitter Sentiment Analysis Dataset [Mozetič et al 2016], containing 50k manually annotated Slovak tweets. As such, it is fine-tuned for tweets, and it is not advised to use the model for general-purpose sentiment analysis.
## Results
The model was evaluated in our paper [Pikuliak et al 2021, Section 4.4]. It achieves \\(0.67\\) F1-score on the original dataset and \\(0.58\\) F1-score on general reviews dataset.
## Cite
|
[
"# Sentiment Analysis model based on SlovakBERT\n\nThis is a sentiment analysis classifier based on SlovakBERT. The model can distinguish three level of sentiment:\n\n- '-1' - Negative sentiment\n- '0' - Neutral sentiment\n- '1' - Positive setiment\n\nThe model was fine-tuned using Slovak part of Multilingual Twitter Sentiment Analysis Dataset [Mozetič et al 2016] containing 50k manually annotated Slovak tweets. As such, it is fine-tuned for tweets and it is not advised to use the model for general-purpose sentiment analysis.",
"## Results\n\nThe model was evaluated in our paper [Pikuliak et al 2021, Section 4.4]. It achieves \\\\(0.67\\\\) F1-score on the original dataset and \\\\(0.58\\\\) F1-score on general reviews dataset.",
"## Cite"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #twitter #sentiment-analysis #sk #arxiv-2109.15254 #license-cc #autotrain_compatible #endpoints_compatible #region-us \n",
"# Sentiment Analysis model based on SlovakBERT\n\nThis is a sentiment analysis classifier based on SlovakBERT. The model can distinguish three level of sentiment:\n\n- '-1' - Negative sentiment\n- '0' - Neutral sentiment\n- '1' - Positive setiment\n\nThe model was fine-tuned using Slovak part of Multilingual Twitter Sentiment Analysis Dataset [Mozetič et al 2016] containing 50k manually annotated Slovak tweets. As such, it is fine-tuned for tweets and it is not advised to use the model for general-purpose sentiment analysis.",
"## Results\n\nThe model was evaluated in our paper [Pikuliak et al 2021, Section 4.4]. It achieves \\\\(0.67\\\\) F1-score on the original dataset and \\\\(0.58\\\\) F1-score on general reviews dataset.",
"## Cite"
] |
sentence-similarity
|
sentence-transformers
|
# Sentence similarity model based on SlovakBERT
This is a sentence similarity model based on [SlovakBERT](https://huggingface.co/gerulata/slovakbert). The model was fine-tuned using [STSbenchmark](https://ixa2.si.ehu.eus/stswiki/index.php/STSbenchmark) [Cer et al 2017] translated to Slovak using [M2M100](https://huggingface.co/facebook/m2m100_1.2B). The model can be used as a universal sentence encoder for Slovak sentences.
## Results
The model was evaluated in [our paper](https://arxiv.org/abs/2109.15254) [Pikuliak et al 2021, Section 4.3]. It achieves \\(0.791\\) Spearman correlation on STSbenchmark test set.
## Usage
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('kinit/slovakbert-sts-stsb')
embeddings = model.encode(sentences)
print(embeddings)
```
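The embeddings can then be compared directly. A short follow-up sketch, assuming a `sentence-transformers` release that provides `util.cos_sim` (the sentence pair is taken from this card's widget):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('kinit/slovakbert-sts-stsb')

# Encode a related sentence pair and compare the embeddings with cosine similarity.
embeddings = model.encode(["Izrael uskutočnil letecké údery v blízkosti Damasku.",
                           "Izrael uskutočnil vzdušný útok na Sýriu."])
print(util.cos_sim(embeddings[0], embeddings[1]))
```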
## Cite
```
@inproceedings{pikuliak-etal-2022-slovakbert,
title = "{S}lovak{BERT}: {S}lovak Masked Language Model",
author = "Pikuliak, Mat{\'u}{\v{s}} and
Grivalsk{\'y}, {\v{S}}tefan and
Kon{\^o}pka, Martin and
Bl{\v{s}}t{\'a}k, Miroslav and
Tamajka, Martin and
Bachrat{\'y}, Viktor and
Simko, Marian and
Bal{\'a}{\v{z}}ik, Pavol and
Trnka, Michal and
Uhl{\'a}rik, Filip",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.530",
pages = "7156--7168",
abstract = "We introduce a new Slovak masked language model called \textit{SlovakBERT}. This is to our best knowledge the first paper discussing Slovak transformers-based language models. We evaluate our model on several NLP tasks and achieve state-of-the-art results. This evaluation is likewise the first attempt to establish a benchmark for Slovak language models. We publish the masked language model, as well as the fine-tuned models for part-of-speech tagging, sentiment analysis and semantic textual similarity.",
}
```
|
{"language": ["sk"], "license": "cc", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "sts"], "datasets": ["glue"], "metrics": ["spearmanr"], "pipeline_tag": "sentence-similarity", "widget": [{"source_sentence": "Izrael uskuto\u010dnil leteck\u00e9 \u00fadery v bl\u00edzkosti Damasku.", "sentences": ["Izrael uskuto\u010dnil vzdu\u0161n\u00fd \u00fatok na S\u00fdriu.", "Pes le\u017e\u00ed na gau\u010di a m\u00e1 hlavu na bielom vank\u00fa\u0161i."]}]}
|
kinit/slovakbert-sts-stsb
| null |
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"sts",
"sk",
"dataset:glue",
"arxiv:2109.15254",
"license:cc",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.15254"
] |
[
"sk"
] |
TAGS
#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #sts #sk #dataset-glue #arxiv-2109.15254 #license-cc #endpoints_compatible #has_space #region-us
|
# Sentence similarity model based on SlovakBERT
This is a sentence similarity model based on SlovakBERT. The model was fine-tuned using STSbenchmark [Cer et al 2017] translated to Slovak using M2M100. The model can be used as a universal sentence encoder for Slovak sentences.
## Results
The model was evaluated in our paper [Pikuliak et al 2021, Section 4.3]. It achieves \\(0.791\\) Spearman correlation on STSbenchmark test set.
## Usage
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Cite
|
[
"# Sentence similarity model based on SlovakBERT\n\nThis is a sentence similarity model based on SlovakBERT. The model was fine-tuned using STSbenchmark [Cer et al 2017] translated to Slovak using M2M100. The model can be used as an universal sentence encoder for Slovak sentences.",
"## Results\n\nThe model was evaluated in our paper [Pikuliak et al 2021, Section 4.3]. It achieves \\\\(0.791\\\\) Spearman correlation on STSbenchmark test set.",
"## Usage\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Cite"
] |
[
"TAGS\n#sentence-transformers #pytorch #roberta #feature-extraction #sentence-similarity #sts #sk #dataset-glue #arxiv-2109.15254 #license-cc #endpoints_compatible #has_space #region-us \n",
"# Sentence similarity model based on SlovakBERT\n\nThis is a sentence similarity model based on SlovakBERT. The model was fine-tuned using STSbenchmark [Cer et al 2017] translated to Slovak using M2M100. The model can be used as an universal sentence encoder for Slovak sentences.",
"## Results\n\nThe model was evaluated in our paper [Pikuliak et al 2021, Section 4.3]. It achieves \\\\(0.791\\\\) Spearman correlation on STSbenchmark test set.",
"## Usage\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Cite"
] |
text-generation
|
transformers
|
#RickSanChez
|
{"tags": ["conversational"]}
|
kipiiler/Rickbot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#RickSanChez
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
feature-extraction
|
transformers
|
## Model Description
This model is based on **Sentence-Transformers'** `distiluse-base-multilingual-cased` multilingual model, which has been extended to understand sentence embeddings in Estonian.
## Sentence-Transformers
This model can be imported directly via the SentenceTransformers package as shown below:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('kiri-ai/distiluse-base-multilingual-cased-et')
sentences = ['Here is a sample sentence','Another sample sentence']
embeddings = model.encode(sentences)
print("Sentence embeddings:")
print(embeddings)
```
## Fine-tuning
The fine-tuning and training processes were inspired by [sbert's](https://www.sbert.net/) multilingual training techniques which are available [here](https://www.sbert.net/examples/training/multilingual/README.html). The documentation shows and explains the step-by-step process of using parallel sentences to train models in a different language.
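A rough sketch of that teacher-student setup, assuming the `ParallelSentencesDataset` and `MSELoss` utilities from `sentence-transformers`; the file name and hyperparameters below are placeholders, not the values used for this model:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: the original multilingual encoder; student: the copy being extended to Estonian.
teacher = SentenceTransformer("distiluse-base-multilingual-cased")
student = SentenceTransformer("distiluse-base-multilingual-cased")

# Placeholder file of tab-separated English/Estonian parallel sentences.
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data("en-et-parallel.tsv")
train_dataloader = DataLoader(train_data, shuffle=True, batch_size=32)

# The student learns to map both languages onto the teacher's embeddings.
train_loss = losses.MSELoss(model=student)
student.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
```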
### Resources
The model was fine-tuned on English-Estonian parallel sentences taken from [OPUS](http://opus.nlpl.eu/) and [ParaCrawl](https://paracrawl.eu/).
|
{"language": "et"}
|
kiri-ai/distiluse-base-multilingual-cased-et
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"et",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"et"
] |
TAGS
#transformers #pytorch #distilbert #feature-extraction #et #endpoints_compatible #region-us
|
## Model Description
This model is based on Sentence-Transformers' 'distiluse-base-multilingual-cased' multilingual model, which has been extended to understand sentence embeddings in Estonian.
## Sentence-Transformers
This model can be imported directly via the SentenceTransformers package as shown below:
## Fine-tuning
The fine-tuning and training processes were inspired by sbert's multilingual training techniques which are available here. The documentation shows and explains the step-by-step process of using parallel sentences to train models in a different language.
### Resources
The model was fine-tuned on English-Estonian parallel sentences taken from OPUS and ParaCrawl.
|
[
"## Model Description\n\nThis model is based off Sentence-Transformer's 'distiluse-base-multilingual-cased' multilingual model that has been extended to understand sentence embeddings in Estonian.",
"## Sentence-Transformers\n\nThis model can be imported directly via the SentenceTransformers package as shown below:",
"## Fine-tuning\n\nThe fine-tuning and training processes were inspired by sbert's multilingual training techniques which are available here. The documentation shows and explains the step-by-step process of using parallel sentences to train models in a different language.",
"### Resources\n\nThe model was fine-tuned on English-Estonian parallel sentences taken from OPUS and ParaCrawl."
] |
[
"TAGS\n#transformers #pytorch #distilbert #feature-extraction #et #endpoints_compatible #region-us \n",
"## Model Description\n\nThis model is based off Sentence-Transformer's 'distiluse-base-multilingual-cased' multilingual model that has been extended to understand sentence embeddings in Estonian.",
"## Sentence-Transformers\n\nThis model can be imported directly via the SentenceTransformers package as shown below:",
"## Fine-tuning\n\nThe fine-tuning and training processes were inspired by sbert's multilingual training techniques which are available here. The documentation shows and explains the step-by-step process of using parallel sentences to train models in a different language.",
"### Resources\n\nThe model was fine-tuned on English-Estonian parallel sentences taken from OPUS and ParaCrawl."
] |
text-generation
|
transformers
|
# Pytorch int8 quantized version of gpt2-large
## Usage
Download the .bin file locally and load it with:
```python
import torch

model = torch.load("path/to/pytorch_model_quantized.bin")
```
The rest of the usage follows the [original instructions](https://huggingface.co/gpt2-large).
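For instance, assuming the object loaded above is a full `GPT2LMHeadModel`, text can be generated with the standard gpt2-large tokenizer (the prompt below is only a placeholder):
```python
from transformers import GPT2Tokenizer

# The tokenizer is unchanged by quantization, so the upstream gpt2-large vocabulary is used.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")

inputs = tokenizer("The quantized model was loaded and", return_tensors="pt")
outputs = model.generate(**inputs, max_length=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```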
|
{"language": ["en"]}
|
kiri-ai/gpt2-large-quantized
| null |
[
"transformers",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Pytorch int8 quantized version of gpt2-large
## Usage
Download the .bin file locally.
Load with:
Rest of the usage according to original instructions.
|
[
"# Pytorch int8 quantized version of gpt2-large",
"## Usage\n\nDownload the .bin file locally.\nLoad with:\n\nRest of the usage according to original instructions."
] |
[
"TAGS\n#transformers #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Pytorch int8 quantized version of gpt2-large",
"## Usage\n\nDownload the .bin file locally.\nLoad with:\n\nRest of the usage according to original instructions."
] |
text2text-generation
|
transformers
|
# T5 Base with QA + Summary + Emotion
## Dependencies
Requires transformers>=4.0.0
## Description
This model was finetuned on the CoQa, SQuAD 2, GoEmotions and CNN/DailyMail datasets.
It achieves a score of **F1 79.5** on the SQuAD 2 dev set and a score of **F1 70.6** on the CoQa dev set.
Summarisation and emotion detection have not been evaluated yet.
## Usage
### Question answering
#### With Transformers
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("kiri-ai/t5-base-qa-summary-emotion")
tokenizer = T5Tokenizer.from_pretrained("kiri-ai/t5-base-qa-summary-emotion")
def get_answer(question, prev_qa, context):
    input_text = [f"q: {qa[0]} a: {qa[1]}" for qa in prev_qa]
    input_text.append(f"q: {question}")
    input_text.append(f"c: {context}")
    input_text = " ".join(input_text)
    features = tokenizer([input_text], return_tensors='pt')
    tokens = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'], max_length=64)
    return tokenizer.decode(tokens[0], skip_special_tokens=True)

print(get_answer("Why is the moon yellow?", [], "I'm not entirely sure why the moon is yellow.")) # unknown

context = "Elon Musk left OpenAI to avoid possible future conflicts with his role as CEO of Tesla."
print(get_answer("Why not?", [("Does Elon Musk still work with OpenAI", "No")], context)) # to avoid possible future conflicts with his role as CEO of Tesla
```
#### With Kiri
```python
from kiri.models import T5QASummaryEmotion
context = "Elon Musk left OpenAI to avoid possible future conflicts with his role as CEO of Tesla."
prev_qa = [("Does Elon Musk still work with OpenAI", "No")]
model = T5QASummaryEmotion()
# Leave prev_qa blank for non conversational question-answering
model.qa("Why not?", context, prev_qa=prev_qa)
> "to avoid possible future conflicts with his role as CEO of Tesla"
```
### Summarisation
#### With Transformers
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("kiri-ai/t5-base-qa-summary-emotion")
tokenizer = T5Tokenizer.from_pretrained("kiri-ai/t5-base-qa-summary-emotion")
def summary(context):
    input_text = f"summarize: {context}"
    features = tokenizer([input_text], return_tensors='pt')
    tokens = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'], max_length=64)
    return tokenizer.decode(tokens[0], skip_special_tokens=True)
```
#### With Kiri
```python
from kiri.models import T5QASummaryEmotion
model = T5QASummaryEmotion()
model.summarise("Long text to summarise")
> "Short summary of long text"
```
### Emotion detection
#### With Transformers
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained("kiri-ai/t5-base-qa-summary-emotion")
tokenizer = T5Tokenizer.from_pretrained("kiri-ai/t5-base-qa-summary-emotion")
def emotion(context):
    input_text = f"emotion: {context}"
    features = tokenizer([input_text], return_tensors='pt')
    tokens = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'], max_length=64)
    return tokenizer.decode(tokens[0], skip_special_tokens=True)
```
#### With Kiri
```python
from kiri.models import T5QASummaryEmotion
model = T5QASummaryEmotion()
model.emotion("I hope this works!")
> "optimism"
```
## About us
Kiri makes using state-of-the-art models easy, accessible and scalable.
[Website](https://kiri.ai) | [Natural Language Engine](https://github.com/kiri-ai/kiri)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["question-answering", "emotion-detection", "summarisation"], "datasets": ["coqa", "squad_v2", "go_emotions", "cnn_dailymail"], "metrics": ["f1"], "pipeline_tag": "text2text-generation", "widget": [{"text": "q: Who is Elon Musk? a: an entrepreneur q: When was he born? c: Elon Musk is an entrepreneur born in 1971. </s>"}, {"text": "emotion: I hope this works! </s>"}]}
|
kiri-ai/t5-base-qa-summary-emotion
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-answering",
"emotion-detection",
"summarisation",
"en",
"dataset:coqa",
"dataset:squad_v2",
"dataset:go_emotions",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #question-answering #emotion-detection #summarisation #en #dataset-coqa #dataset-squad_v2 #dataset-go_emotions #dataset-cnn_dailymail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# T5 Base with QA + Summary + Emotion
## Dependencies
Requires transformers>=4.0.0
## Description
This model was finetuned on the CoQa, SQuAD 2, GoEmotions and CNN/DailyMail datasets.
It achieves a score of F1 79.5 on the SQuAD 2 dev set and a score of F1 70.6 on the CoQa dev set.
Summarisation and emotion detection have not been evaluated yet.
## Usage
### Question answering
#### With Transformers
#### With Kiri
### Summarisation
#### With Transformers
#### With Kiri
### Emotion detection
#### With Transformers
#### With Kiri
## About us
Kiri makes using state-of-the-art models easy, accessible and scalable.
Website | Natural Language Engine
|
[
"# T5 Base with QA + Summary + Emotion",
"## Dependencies\n\nRequires transformers>=4.0.0",
"## Description\n\nThis model was finetuned on the CoQa, Squad 2, GoEmotions and CNN/DailyMail.\n\nIt achieves a score of F1 79.5 on the Squad 2 dev set and a score of F1 70.6 on the CoQa dev set.\n\nSummarisation and emotion detection has not been evaluated yet.",
"## Usage",
"### Question answering",
"#### With Transformers",
"#### With Kiri",
"### Summarisation",
"#### With Transformers",
"#### With Kiri",
"### Emotion detection",
"#### With Transformers",
"#### With Kiri",
"## About us\n\nKiri makes using state-of-the-art models easy, accessible and scalable.\n\nWebsite | Natural Language Engine"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #question-answering #emotion-detection #summarisation #en #dataset-coqa #dataset-squad_v2 #dataset-go_emotions #dataset-cnn_dailymail #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# T5 Base with QA + Summary + Emotion",
"## Dependencies\n\nRequires transformers>=4.0.0",
"## Description\n\nThis model was finetuned on the CoQa, Squad 2, GoEmotions and CNN/DailyMail.\n\nIt achieves a score of F1 79.5 on the Squad 2 dev set and a score of F1 70.6 on the CoQa dev set.\n\nSummarisation and emotion detection has not been evaluated yet.",
"## Usage",
"### Question answering",
"#### With Transformers",
"#### With Kiri",
"### Summarisation",
"#### With Transformers",
"#### With Kiri",
"### Emotion detection",
"#### With Transformers",
"#### With Kiri",
"## About us\n\nKiri makes using state-of-the-art models easy, accessible and scalable.\n\nWebsite | Natural Language Engine"
] |
text-classification
|
transformers
|
# Reddit exercise feedback classification
Model to classify Reddit comments for exercise feedback. The current classes are good, correction, bad posture, and not informative. If you want to use it locally, see the usage below.
### Usage:
```py
from transformers import pipeline
classifier = pipeline("text-classification", "kittinan/exercise-feedback-classification")
classifier("search for alan thrall deadlift video he will explain basic ques")
#[{'label': 'correction', 'score': 0.9998193979263306}]
```
|
{}
|
kittinan/exercise-feedback-classification
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Reddit exercise feedback classification
Model to classify Reddit comments for exercise feedback. The current classes are good, correction, bad posture, and not informative. If you want to use it locally, see the Usage section.
### Usage:
|
[
"# Reddit exercise feedback classification\n\nModel to classify Reddit's comments for exercise feedback. Current classes are good, correction, bad posture, not informative. If you want to use it locally,",
"### Usage:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Reddit exercise feedback classification\n\nModel to classify Reddit's comments for exercise feedback. Current classes are good, correction, bad posture, not informative. If you want to use it locally,",
"### Usage:"
] |
null | null |
# RoBERTa base model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1907.11692) and first released in
[this repository](https://github.com/pytorch/fairseq/tree/master/examples/roberta). This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
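An illustrative fill-mask call against the upstream `roberta-base` checkpoint that this card text describes (the prompt is arbitrary):
```python
from transformers import pipeline

# RoBERTa was pretrained with a masked language modeling objective,
# so it can fill in a masked token directly.
unmasker = pipeline("fill-mask", model="roberta-base")
print(unmasker("The goal of life is <mask>."))
```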
|
{"language": "en", "license": "mit", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
|
kjackson/distilbert-base-uncased-finetuned-emotion
| null |
[
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1907.11692",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692"
] |
[
"en"
] |
TAGS
#exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #license-mit #region-us
|
# RoBERTa base model
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it
makes a difference between english and English.
Disclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by
the Hugging Face team.
|
[
"# RoBERTa base model\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is case-sensitive: it\nmakes a difference between english and English.\n\nDisclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by\nthe Hugging Face team."
] |
[
"TAGS\n#exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1907.11692 #license-mit #region-us \n",
"# RoBERTa base model\n\nPretrained model on English language using a masked language modeling (MLM) objective. It was introduced in\nthis paper and first released in\nthis repository. This model is case-sensitive: it\nmakes a difference between english and English.\n\nDisclaimer: The team releasing RoBERTa did not write a model card for this model so this model card has been written by\nthe Hugging Face team."
] |
token-classification
|
transformers
|
# Nominalization Detector
This model identifies "predicative nominalizations", that is, nominalizations that carry an eventive (or "verbal") meaning in context. It is a `bert-base-cased` pretrained model, fine-tuned for token classification on top of the "nominalization detection" task as defined and annotated by the QANom project [(Klein et al., COLING 2020)](https://www.aclweb.org/anthology/2020.coling-main.274/).
## Task Description
The model is trained as a binary classifier, classifying candidate nominalizations.
The candidates are extracted using a POS tagger (filtering common nouns) and additional lexical resources (e.g. WordNet and CatVar), keeping only nouns that have (at least one) derivationally-related verb. In the QANom annotation project, these candidates are given to annotators to decide whether they carry a "verbal" meaning in the context of the sentence. The current model reproduces this binary classification.
## Demo
Check out our cool [demo](https://huggingface.co/spaces/kleinay/nominalization-detection-demo)!
## Usage
The candidate extraction algorithm is implemented inside the `qanom` package - see the README in the [QANom github repo](https://github.com/kleinay/QANom) for full documentation. The `qanom` package is also available via `pip install qanom`.
For ease of use, we encapsulated the full nominalization detection pipeline (i.e. candidate extraction + predicate classification) in the `qanom.nominalization_detector.NominalizationDetector` class, which internally utilizes this `nominalization-candidate-classifier`:
```python
from qanom.nominalization_detector import NominalizationDetector
detector = NominalizationDetector()
raw_sentences = ["The construction of the officer 's building finished right after the beginning of the destruction of the previous construction ."]
print(detector(raw_sentences, return_all_candidates=True))
print(detector(raw_sentences, threshold=0.75, return_probability=False))
```
Outputs:
```json
[[{'predicate_idx': 1,
'predicate': 'construction',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.7626778483390808,
'verb_form': 'construct'},
{'predicate_idx': 4,
'predicate': 'officer',
'predicate_detector_prediction': False,
'predicate_detector_probability': 0.19832570850849152,
'verb_form': 'officer'},
{'predicate_idx': 6,
'predicate': 'building',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.5794129371643066,
'verb_form': 'build'},
{'predicate_idx': 11,
'predicate': 'beginning',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.8937646150588989,
'verb_form': 'begin'},
{'predicate_idx': 14,
'predicate': 'destruction',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.8501205444335938,
'verb_form': 'destruct'},
{'predicate_idx': 18,
'predicate': 'construction',
'predicate_detector_prediction': True,
'predicate_detector_probability': 0.7022264003753662,
'verb_form': 'construct'}]]
```
```json
[[{'predicate_idx': 1, 'predicate': 'construction', 'verb_form': 'construct'},
{'predicate_idx': 11, 'predicate': 'beginning', 'verb_form': 'begin'},
{'predicate_idx': 14, 'predicate': 'destruction', 'verb_form': 'destruct'}]]
```
## Cite
```latex
@inproceedings{klein2020qanom,
title={QANom: Question-Answer driven SRL for Nominalizations},
author={Klein, Ayal and Mamou, Jonathan and Pyatkin, Valentina and Stepanov, Daniela and He, Hangfeng and Roth, Dan and Zettlemoyer, Luke and Dagan, Ido},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={3069--3083},
year={2020}
}
```
|
{"language": ["en"], "tags": ["pytorch", "token-classification", "nominalizations"], "datasets": ["kleinay/qanom"]}
|
kleinay/nominalization-candidate-classifier
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"nominalizations",
"en",
"dataset:kleinay/qanom",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #nominalizations #en #dataset-kleinay/qanom #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Nominalization Detector
This model identifies "predicative nominalizations", that is, nominalizations that carry an eventive (or "verbal") meaning in context. It is a 'bert-base-cased' pretrained model, fine-tuned for token classification on top of the "nominalization detection" task as defined and annotated by the QANom project (Klein et. al., COLING 2020).
## Task Description
The model is trained as a binary classifier, classifying candidate nominalizations.
The candidates are extracted using a POS tagger (filtering common nouns) and additionally lexical resources (e.g. WordNet and CatVar), filtering nouns that have (at least one) derivationally-related verb. In the QANom annotation project, these candidates are given to annotators to decide whether they carry a "verbal" meaning in the context of the sentence. The current model reproduces this binary classification.
## Demo
Check out our cool demo!
## Usage
The candidate extraction algorithm is implemented inside the 'qanom' package - see the README in the QANom github repo for full documentation. The 'qanom' package is also available via 'pip install qanom'.
For ease of use, we encapsulated the full nominalization detection pipeline (i.e. candidate extraction + predicate classification) in the 'qanom.nominalization_detector.NominalizationDetector' class, which internally utilize this 'nominalization-candidate-classifier':
Outputs:
## Cite
|
[
"# Nominalization Detector\n\nThis model identifies \"predicative nominalizations\", that is, nominalizations that carry an eventive (or \"verbal\") meaning in context. It is a 'bert-base-cased' pretrained model, fine-tuned for token classification on top of the \"nominalization detection\" task as defined and annotated by the QANom project (Klein et. al., COLING 2020).",
"## Task Description\n\nThe model is trained as a binary classifier, classifying candidate nominalizations. \nThe candidates are extracted using a POS tagger (filtering common nouns) and additionally lexical resources (e.g. WordNet and CatVar), filtering nouns that have (at least one) derivationally-related verb. In the QANom annotation project, these candidates are given to annotators to decide whether they carry a \"verbal\" meaning in the context of the sentence. The current model reproduces this binary classification.",
"## Demo\n\nCheck out our cool demo!",
"## Usage\n\nThe candidate extraction algorithm is implemented inside the 'qanom' package - see the README in the QANom github repo for full documentation. The 'qanom' package is also available via 'pip install qanom'. \n\nFor ease of use, we encapsulated the full nominalization detection pipeline (i.e. candidate extraction + predicate classification) in the 'qanom.nominalization_detector.NominalizationDetector' class, which internally utilize this 'nominalization-candidate-classifier':\n\n \n\nOutputs:",
"## Cite"
] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #nominalizations #en #dataset-kleinay/qanom #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Nominalization Detector\n\nThis model identifies \"predicative nominalizations\", that is, nominalizations that carry an eventive (or \"verbal\") meaning in context. It is a 'bert-base-cased' pretrained model, fine-tuned for token classification on top of the \"nominalization detection\" task as defined and annotated by the QANom project (Klein et. al., COLING 2020).",
"## Task Description\n\nThe model is trained as a binary classifier, classifying candidate nominalizations. \nThe candidates are extracted using a POS tagger (filtering common nouns) and additionally lexical resources (e.g. WordNet and CatVar), filtering nouns that have (at least one) derivationally-related verb. In the QANom annotation project, these candidates are given to annotators to decide whether they carry a \"verbal\" meaning in the context of the sentence. The current model reproduces this binary classification.",
"## Demo\n\nCheck out our cool demo!",
"## Usage\n\nThe candidate extraction algorithm is implemented inside the 'qanom' package - see the README in the QANom github repo for full documentation. The 'qanom' package is also available via 'pip install qanom'. \n\nFor ease of use, we encapsulated the full nominalization detection pipeline (i.e. candidate extraction + predicate classification) in the 'qanom.nominalization_detector.NominalizationDetector' class, which internally utilize this 'nominalization-candidate-classifier':\n\n \n\nOutputs:",
"## Cite"
] |
text2text-generation
|
transformers
|
# A Seq2Seq model for QANom parsing
This is a `t5-small` pretrained model, fine-tuned on the task of generating QANom QAs.
"QANom" stands for "QASRL for Nominalizations", which is an adaptation of [QASRL (Question-Answer driven Semantic Role Labeling)](https://qasrl.org) for the nominal predicates domain. See the [QANom paper](https://aclanthology.org/2020.coling-main.274/) for details about the task. The QANom Dataset official site is a [Google drive](https://drive.google.com/drive/folders/15PHKVdPm65ysgdkV47z6J_73kETk7_of), but we also wrapped it into a [Huggingface Dataset](https://huggingface.co/datasets/biu-nlp/qanom), which is easier to plug-and-play with (check out our [HF profile](https://huggingface.co/biu-nlp) for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).
## Demo
Visit [our demo](https://huggingface.co/spaces/kleinay/qanom-seq2seq-demo) for interactively exploring our model!
## Usage
The model and tokenizer can be downloaded as simply as running:
```python
import transformers
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
tokenizer = transformers.AutoTokenizer.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
```
However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
In order to use the model for QANom parsing easily, we suggest downloading the [`pipeline.py`](https://huggingface.co/kleinay/qanom-seq2seq-model-baseline/blob/main/pipeline.py) file from this repository, and then use the `QASRL_Pipeline` class:
```python
from pipeline import QASRL_Pipeline
pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-baseline")
pipe("The student was interested in Luke 's <predicate> research about see animals .", verb_form="research", predicate_type="nominal")
```
Which will output:
```json
[{'generated_text': 'who _ _ researched something _ _ ?<extra_id_7> Luke',
'QAs': [{'question': 'who researched something ?', 'answers': ['Luke']}]}]
```
You can learn more about using `transformers.pipelines` in the [official docs](https://huggingface.co/docs/transformers/main_classes/pipelines).
Notice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the `<predicate>` symbol, but you can also specify your own predicate marker:
```python
pipe("The student was interested in Luke 's <PRED> research about see animals .", verb_form="research", predicate_type="nominal", predicate_marker="<PRED>")
```
In addition, you can specify additional kwargs for controlling the model's decoding algorithm:
```python
pipe("The student was interested in Luke 's <predicate> research about see animals .", verb_form="research", predicate_type="nominal", num_beams=3)
```
|
{"language": ["en"], "tags": ["semantic-role-labeling", "question-answer generation", "pytorch"], "datasets": ["kleinay/qanom"]}
|
kleinay/qanom-seq2seq-model-baseline
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"semantic-role-labeling",
"question-answer generation",
"en",
"dataset:kleinay/qanom",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #semantic-role-labeling #question-answer generation #en #dataset-kleinay/qanom #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# A Seq2Seq model for QANom parsing
This is a 't5-small' pretrained model, fine-tuned on the task of generating QANom QAs.
"QANom" stands for "QASRL for Nominalizations", which is an adaptation of QASRL (Question-Answer driven Semantic Role Labeling) for the nominal predicates domain. See the QANom paper for details about the task. The QANom Dataset official site is a Google drive, but we also wrapped it into a Huggingface Dataset, which is easier to plug-and-play with (check out our HF profile for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).
## Demo
Visit our demo for interactively exploring our model!
## Usage
The model and tokenizer can be downloaded as simply as running:
However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
In order to use the model for QANom parsing easily, we suggest downloading the 'URL' file from this repository, and then use the 'QASRL_Pipeline' class:
Which will output:
You can learn more about using 'transformers.pipelines' in the official docs.
Notice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the '<predicate>' symbol, but you can also specify your own predicate marker:
In addition, you can specify additional kwargs for controlling the model's decoding algorithm:
|
[
"# A Seq2Seq model for QANom parsing\n\nThis is a 't5-small' pretrained model, fine-tuned on the task of generating QANom QAs. \n\n\"QANom\" stands for \"QASRL for Nominalizations\", which is an adaptation of QASRL (Question-Answer driven Semantic Role Labeling) for the nominal predicates domain. See the QANom paper for details about the task. The QANom Dataset official site is a Google drive, but we also wrapped it into a Huggingface Dataset, which is easier to plug-and-play with (check out our HF profile for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).",
"## Demo\n\nVisit our demo for interactively exploring our model!",
"## Usage \n\nThe model and tokenizer can be downloaded as simply as running:\n\n\nHowever, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's \"task prefix\", incorporating the predicate type and/or the verbal for of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs). \nIn order to use the model for QANom parsing easily, we suggest downloading the 'URL' file from this repository, and then use the 'QASRL_Pipeline' class:\n\n \nWhich will output:\n \nYou can learn more about using 'transformers.pipelines' in the official docs.\n\nNotice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the '<predicate>' symbol, but you can also specify your own predicate marker:\n\nIn addition, you can specify additional kwargs for controling the model's decoding algorithm:"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #semantic-role-labeling #question-answer generation #en #dataset-kleinay/qanom #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# A Seq2Seq model for QANom parsing\n\nThis is a 't5-small' pretrained model, fine-tuned on the task of generating QANom QAs. \n\n\"QANom\" stands for \"QASRL for Nominalizations\", which is an adaptation of QASRL (Question-Answer driven Semantic Role Labeling) for the nominal predicates domain. See the QANom paper for details about the task. The QANom Dataset official site is a Google drive, but we also wrapped it into a Huggingface Dataset, which is easier to plug-and-play with (check out our HF profile for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).",
"## Demo\n\nVisit our demo for interactively exploring our model!",
"## Usage \n\nThe model and tokenizer can be downloaded as simply as running:\n\n\nHowever, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's \"task prefix\", incorporating the predicate type and/or the verbal for of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs). \nIn order to use the model for QANom parsing easily, we suggest downloading the 'URL' file from this repository, and then use the 'QASRL_Pipeline' class:\n\n \nWhich will output:\n \nYou can learn more about using 'transformers.pipelines' in the official docs.\n\nNotice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the '<predicate>' symbol, but you can also specify your own predicate marker:\n\nIn addition, you can specify additional kwargs for controling the model's decoding algorithm:"
] |
text2text-generation
|
transformers
|
# A Seq2Seq model for QANom parsing
This is a `t5-small` pretrained model, fine-tuned jointly on the tasks of generating QASRL and QANom QAs.
"QANom" stands for "QASRL for Nominalizations", which is an adaptation of [QASRL (Question-Answer driven Semantic Role Labeling)](https://qasrl.org) for the nominal predicates domain. See the [QANom paper](https://aclanthology.org/2020.coling-main.274/) for details about the task. The QANom Dataset official site is a [Google drive](https://drive.google.com/drive/folders/15PHKVdPm65ysgdkV47z6J_73kETk7_of), but we also wrapped it into a [Huggingface Dataset](https://huggingface.co/datasets/biu-nlp/qanom), which is easier to plug-and-play with (check out our [HF profile](https://huggingface.co/biu-nlp) for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).
## Demo
Visit [our demo](https://huggingface.co/spaces/kleinay/qanom-seq2seq-demo) for interactively exploring our model!
## Usage
The model and tokenizer can be downloaded as simply as running:
```python
import transformers
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
tokenizer = transformers.AutoTokenizer.from_pretrained("kleinay/qanom-seq2seq-model-baseline")
```
However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
In order to use the model for QANom parsing easily, we suggest downloading the [`pipeline.py`](https://huggingface.co/kleinay/qanom-seq2seq-model-joint/blob/main/pipeline.py) file from this repository, and then use the `QASRL_Pipeline` class:
```python
from pipeline import QASRL_Pipeline
pipe = QASRL_Pipeline("kleinay/qanom-seq2seq-model-joint")
pipe("The student was interested in Luke 's <predicate> research about sea animals .", verb_form="research", predicate_type="nominal")
```
Which will output:
```json
[{'generated_text': 'who _ _ researched something _ _ ?<extra_id_7> Luke',
'QAs': [{'question': 'who researched something ?', 'answers': ['Luke']}]}]
```
You can learn more about using `transformers.pipelines` in the [official docs](https://huggingface.co/docs/transformers/main_classes/pipelines).
Notice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the `<predicate>` symbol, but you can also specify your own predicate marker:
```python
pipe("The student was interested in Luke 's <PRED> research about sea animals .", verb_form="research", predicate_type="nominal", predicate_marker="<PRED>")
```
In addition, you can specify additional kwargs for controlling the model's decoding algorithm:
```python
pipe("The student was interested in Luke 's <predicate> research about sea animals .", verb_form="research", predicate_type="nominal", num_beams=3)
```
|
{"language": ["en"], "tags": ["semantic-role-labeling", "question-answer generation", "pytorch"], "datasets": ["kleinay/qanom"]}
|
kleinay/qanom-seq2seq-model-joint
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"semantic-role-labeling",
"question-answer generation",
"en",
"dataset:kleinay/qanom",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #semantic-role-labeling #question-answer generation #en #dataset-kleinay/qanom #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# A Seq2Seq model for QANom parsing
This is a 't5-small' pretrained model, fine-tuned jointly on the tasks of generating QASRL and QANom QAs.
"QANom" stands for "QASRL for Nominalizations", which is an adaptation of QASRL (Question-Answer driven Semantic Role Labeling) for the nominal predicates domain. See the QANom paper for details about the task. The QANom Dataset official site is a Google drive, but we also wrapped it into a Huggingface Dataset, which is easier to plug-and-play with (check out our HF profile for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).
## Demo
Visit our demo for interactively exploring our model!
## Usage
The model and tokenizer can be downloaded as simply as running:
However, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's "task prefix", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs).
In order to use the model for QANom parsing easily, we suggest downloading the 'URL' file from this repository, and then use the 'QASRL_Pipeline' class:
Which will output:
You can learn more about using 'transformers.pipelines' in the official docs.
Notice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the '<predicate>' symbol, but you can also specify your own predicate marker:
In addition, you can specify additional kwargs for controlling the model's decoding algorithm:
|
[
"# A Seq2Seq model for QANom parsing\n\nThis is a 't5-small' pretrained model, fine-tuned jointly on the tasks of generating QASRL and QANom QAs. \n\n\"QANom\" stands for \"QASRL for Nominalizations\", which is an adaptation of QASRL (Question-Answer driven Semantic Role Labeling) for the nominal predicates domain. See the QANom paper for details about the task. The QANom Dataset official site is a Google drive, but we also wrapped it into a Huggingface Dataset, which is easier to plug-and-play with (check out our HF profile for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).",
"## Demo\n\nVisit our demo for interactively exploring our model!",
"## Usage \n\nThe model and tokenizer can be downloaded as simply as running:\n\n\nHowever, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's \"task prefix\", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs). \nIn order to use the model for QANom parsing easily, we suggest downloading the 'URL' file from this repository, and then use the 'QASRL_Pipeline' class:\n\n \nWhich will output:\n \nYou can learn more about using 'transformers.pipelines' in the official docs.\n\nNotice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the '<predicate>' symbol, but you can also specify your own predicate marker:\n\nIn addition, you can specify additional kwargs for controling the model's decoding algorithm:"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #semantic-role-labeling #question-answer generation #en #dataset-kleinay/qanom #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# A Seq2Seq model for QANom parsing\n\nThis is a 't5-small' pretrained model, fine-tuned jointly on the tasks of generating QASRL and QANom QAs. \n\n\"QANom\" stands for \"QASRL for Nominalizations\", which is an adaptation of QASRL (Question-Answer driven Semantic Role Labeling) for the nominal predicates domain. See the QANom paper for details about the task. The QANom Dataset official site is a Google drive, but we also wrapped it into a Huggingface Dataset, which is easier to plug-and-play with (check out our HF profile for other related datasets, such as QASRL, QAMR, QADiscourse, and QA-Align).",
"## Demo\n\nVisit our demo for interactively exploring our model!",
"## Usage \n\nThe model and tokenizer can be downloaded as simply as running:\n\n\nHowever, the model fine-tuning procedure involves input preprocessing (marking the predicate in the sentence, T5's \"task prefix\", incorporating the predicate type and/or the verbal form of the nominalization) and output postprocessing (parsing the sequence into a list of QASRL-formatted QAs). \nIn order to use the model for QANom parsing easily, we suggest downloading the 'URL' file from this repository, and then use the 'QASRL_Pipeline' class:\n\n \nWhich will output:\n \nYou can learn more about using 'transformers.pipelines' in the official docs.\n\nNotice that you need to specify which word in the sentence is the predicate, about which the question will interrogate. By default, you should precede the predicate with the '<predicate>' symbol, but you can also specify your own predicate marker:\n\nIn addition, you can specify additional kwargs for controling the model's decoding algorithm:"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
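The list above corresponds roughly to a `transformers.TrainingArguments` configuration like the sketch below (a minimal sketch only; the output directory name is an assumption, and the model/dataset wiring for `Trainer` is omitted):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; the Adam betas/epsilon shown are the library defaults.
training_args = TrainingArguments(
    output_dir="trained_model2",   # assumed name, matching this card's title
    learning_rate=5e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15.0,
)
```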
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "trained_model2", "results": []}]}
|
kloon99/KML_Eula_generate_v1
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# trained_model2
This model is a fine-tuned version of distilgpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
[
"# trained_model2\n\nThis model is a fine-tuned version of distilgpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.9.1\n- Datasets 1.14.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# trained_model2\n\nThis model is a fine-tuned version of distilgpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.9.1\n- Datasets 1.14.0\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_model2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "trained_model2", "results": []}]}
|
kloon99/KML_Eula_generate_v2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# trained_model2
This model is a fine-tuned version of distilgpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.9.1
- Datasets 1.14.0
- Tokenizers 0.10.3
|
[
"# trained_model2\n\nThis model is a fine-tuned version of distilgpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.9.1\n- Datasets 1.14.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# trained_model2\n\nThis model is a fine-tuned version of distilgpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.13.0.dev0\n- Pytorch 1.9.1\n- Datasets 1.14.0\n- Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
{'C0': 'audit_rights',
'C1': 'licensee_indemnity',
'C2': 'licensor_indemnity',
'C3': 'license_grant',
'C4': 'eula_others',
'C5': 'licensee_infringement_indemnity',
'C6': 'licensor_exemption_liability',
'C7': 'licensor_limit_liabilty',
'C8': 'software_warranty'}
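The mapping above pairs the model's label ids with software-license clause types. A minimal usage sketch (assuming the checkpoint's config emits the `C0`-`C8` ids shown above; the clause text is a made-up example):

```python
from transformers import pipeline

# Label-id -> clause-type mapping copied verbatim from the card above.
LABEL_MAP = {
    "C0": "audit_rights",
    "C1": "licensee_indemnity",
    "C2": "licensor_indemnity",
    "C3": "license_grant",
    "C4": "eula_others",
    "C5": "licensee_infringement_indemnity",
    "C6": "licensor_exemption_liability",
    "C7": "licensor_limit_liabilty",
    "C8": "software_warranty",
}

classifier = pipeline("text-classification", model="kloon99/KML_Software_License_v1")
clause = "Licensor grants Licensee a non-exclusive, non-transferable license to use the Software."
pred = classifier(clause)[0]
print(pred["label"], "->", LABEL_MAP.get(pred["label"], pred["label"]), round(pred["score"], 3))
```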
|
{}
|
kloon99/KML_Software_License_v1
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
{'C0': 'audit_rights',
'C1': 'licensee_indemnity',
'C2': 'licensor_indemnity',
'C3': 'license_grant',
'C4': 'eula_others',
'C5': 'licensee_infringement_indemnity',
'C6': 'licensor_exemption_liability',
'C7': 'licensor_limit_liabilty',
'C8': 'software_warranty'}
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autotrain_compatible #endpoints_compatible #region-us \n"
] |
fill-mask
|
transformers
|
# KLUE BERT base
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** KLUE BERT base is a pre-trained BERT Model on Korean Language. The developers of KLUE BERT base developed the model in the context of the development of the [Korean Language Understanding Evaluation (KLUE) Benchmark](https://arxiv.org/pdf/2105.09680.pdf).
- **Developed by:** See [GitHub Repo](https://github.com/facebookresearch/fairseq/tree/main/examples/roberta) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** Korean
- **License:** cc-by-sa-4.0
- **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model.
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2105.09680)
- [GitHub Repo](https://github.com/KLUE-benchmark/KLUE)
## How to Get Started With the Model
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("klue/bert-base")
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
```
## Uses
#### Direct Use
The model can be used for tasks including topic classification, semantic textual similarity, natural language inference, named entity recognition, and other tasks outlined in the [KLUE Benchmark](https://github.com/KLUE-benchmark/KLUE).
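For example, since this is a masked language model, it can be queried directly through the fill-mask pipeline (a minimal sketch; the example sentence is the one used in this card's widget, and the predicted tokens depend on the released weights):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="klue/bert-base")
# Widget example from this card's metadata: "The capital of South Korea is [MASK]."
print(fill_mask("대한민국의 수도는 [MASK] 입니다."))
```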
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). The model developers discuss several ethical considerations related to the model in the [paper](https://arxiv.org/pdf/2105.09680.pdf), including:
- Bias issues with the publicly available data used in the pretraining corpora (and considerations related to filtering)
- PII in the data used in the pretraining corpora (and efforts to pseudonymize the data)
For ethical considerations related to the KLUE Benchmark, also see the [paper](https://arxiv.org/pdf/2105.09680.pdf).
## Training
#### Training Data
The authors use the following pretraining corpora for the model, described in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf):
> We gather the following five publicly available Korean corpora from diverse sources to cover a broad set of topics and many different styles. We combine these corpora to build the final pretraining corpus of size approximately 62GB.
>
> - **MODU:** [Modu Corpus](https://corpus.korean.go.kr) is a collection of Korean corpora distributed by [National Institute of Korean Languages](https://corpus.korean.go.kr/). It includes both formal articles (news and books) and colloquial text (dialogues).
> - **CC-100-Kor:** [CC-100](https://data.statmt.org/cc-100/) is the large-scale multilingual web crawled corpora by using CC-Net ([Wenzek et al., 2020](https://www.aclweb.org/anthology/2020.lrec-1.494)). This is used for training XLM-R ([Conneau et al., 2020](https://aclanthology.org/2020.acl-main.747/)). We use the Korean portion from this corpora.
> - **NAMUWIKI:** NAMUWIKI is a Korean web-based encyclopedia, similar to Wikipedia, but known to be less formal. Specifically, we download [the dump](http://dump.thewiki.kr) created on March 2nd, 2020.
> - **NEWSCRAWL:** NEWSCRAWL consists of 12,800,000 news articles published from 2011 to 2020, collected from a news aggregation platform.
> - **PETITION:** Petition is a collection of public petitions posted to the Blue House asking for administrative actions on social issues. We use the articles in the [Blue House National Petition](https://www1.president.go.kr/petitions) published from [August 2017 to March 2019](https://ko-nlp.github.io/Korpora/en-docs/corpuslist/korean_petitions.html).
The authors also describe ethical considerations related to the pretraining corpora in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
#### Training Procedure
##### Preprocessing
The authors describe their preprocessing procedure in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf):
> We filter noisy text and non-Korean text using the same methods from Section 2.3 (of the paper). Each document in the corpus is split into sentences using C++ implementation (v1.3.1.) of rule-based [Korean Sentence Splitter (KSS)](https://github.com/likejazz/korean-sentence-splitter). For CC-100-Kor and NEWSCRAWL, we keep sentences of length greater than equal to 200 characters, as a heuristics to keep well-formed sentences. We then remove sentences included in our benchmark task datasets, using BM25 as a sentence similarity metric ([reference](https://www.microsoft.com/en-us/research/publication/okapi-at-trec-3/)).
###### Tokenization
The authors describe their tokenization procedure in the [associated paper](https://arxiv.org/pdf/2105.09680.pdf):
> We design and use a new tokenization method, morpheme-based subword tokenization. When building a vocabulary, we pre-tokenize a raw text into morphemes using a morphological analyzer, and then we apply byte pair encoding (BPE) ([Senrich et al., 2016](https://aclanthology.org/P16-1162/)) to get the final vocabulary. For morpheme segmentation, we use [Mecab-ko](https://bitbucket.org/eunjeon/mecab-ko), MeCab ([Kudo, 2006](https://taku910.github.io/mecab/)) adapted for Korean, and for BPE segmentation, we use the wordpiece tokenizer from [Huggingface Tokenizers library](https://github.com/huggingface/tokenizers). We specify the vocabulary size to 32k. After building the vocabulary, we only use the BPE model during inference, which allows us to tokenize a word sequence by reflecting morphemes without a morphological analyzer. This improves both usability and speed.
The training configurations are further described in the [paper](https://arxiv.org/pdf/2105.09680.pdf).
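As a small illustration of the resulting tokenizer (a sketch only; the exact subword splits depend on the released vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
print(tokenizer.vocab_size)                               # the paper specifies a 32k vocabulary
print(tokenizer.tokenize("대한민국의 수도는 서울 입니다."))    # morpheme-aware WordPiece-style pieces
```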
## Evaluation
#### Testing Data, Factors and Metrics
The model was evaluated on the [KLUE Benchmark](https://github.com/KLUE-benchmark/KLUE). The tasks and metrics from the KLUE Benchmark that were used to evaluate this model are described briefly below. For more information about the KLUE Benchmark, see the [data card](https://huggingface.co/datasets/klue), [Github Repository](https://github.com/KLUE-benchmark/KLUE), and [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
- **Task:** Topic Classification (TC) - Yonhap News Agency Topic Classification (YNAT), **Metrics:** Macro F1 score, defined as the mean of topic-wise F1 scores, giving the same importance to each topic.
- **Task:** Semantic Textual Similarity (STS), **Metrics:** Pearson's correlation coefficient (Pearson's r) and F1 score
- **Task:** Natural Language Inference (NLI), **Metrics:** Accuracy
- **Task:** Named Entity Recognition (NER), **Metrics:** Entity-level macro F1 (Entity F1) and character-level macro F1 (Char F1) scores
- **Task:** Relation Extraction (RE), **Metrics:** Micro F1 score on cases where a relation exists and area under the precision-recall curve (AUPRC) on all classes
- **Task:** Dependency Parsing (DP), **Metrics:** Unlabeled attachment score (UAS) and labeled attachment score (LAS)
- **Task:** Machine Reading Comprehension (MRC), **Metrics:** Exact match (EM) and character-level ROUGE-W (ROUGE), which can be viewed as longest common consecutive subsequence (LCCS)-based F1 score.
- **Task:** Dialogue State Tracking (DST), **Metrics:** Joint goal accuracy (JGA) and slot micro F1 score (Slot F1)
#### Results
| Task | TC | STS | | NLI | NER | | RE | | DP | | MRC | | DST | |
| :---: |:---: | :---: | :---: |:---:| :---: | :---: |:---:| :---:| :---: |:---: | :---: | :---:| :---: | :---: |
| Metric | F1 | Pearson's r| F1 | ACC | Entity F1 | Char F1 | F1 | AUPRC| UAS | LAS | EM | ROUGE| JGA |Slot F1 |
| | 85.73| 90.85 | 82.84 |81.63| 83.97 | 91.39 |66.44| 66.17| 89.96 |88.05 | 62.32 | 68.51| 46.64 | 91.61 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
- **Hardware Type:** TPU v3-8
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/pdf/2105.09680.pdf) for details on the modeling architecture (BERT), objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ko", "license": "cc-by-sa-4.0", "tags": ["korean", "klue"], "mask_token": "[MASK]", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 [MASK] \uc785\ub2c8\ub2e4."}]}
|
klue/bert-base
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"fill-mask",
"korean",
"klue",
"ko",
"arxiv:2105.09680",
"arxiv:1910.09700",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2105.09680",
"1910.09700"
] |
[
"ko"
] |
TAGS
#transformers #pytorch #safetensors #bert #fill-mask #korean #klue #ko #arxiv-2105.09680 #arxiv-1910.09700 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
KLUE BERT base
==============
Table of Contents
-----------------
* Model Details
* How to Get Started With the Model
* Uses
* Risks, Limitations and Biases
* Training
* Evaluation
* Environmental Impact
* Technical Specifications
* Citation Information
* Model Card Authors
Model Details
-------------
Model Description: KLUE BERT base is a pre-trained BERT Model on Korean Language. The developers of KLUE BERT base developed the model in the context of the development of the Korean Language Understanding Evaluation (KLUE) Benchmark.
* Developed by: See GitHub Repo for model developers
* Model Type: Transformer-based language model
* Language(s): Korean
* License: cc-by-sa-4.0
* Parent Model: See the BERT base uncased model for more information about the BERT base model.
* Resources for more information:
+ Research Paper
+ GitHub Repo
How to Get Started With the Model
---------------------------------
Uses
----
#### Direct Use
The model can be used for tasks including topic classification, semantic textual similarity, natural language inference, named entity recognition, and other tasks outlined in the KLUE Benchmark.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
Risks, Limitations and Biases
-----------------------------
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The model developers discuss several ethical considerations related to the model in the paper, including:
* Bias issues with the publicly available data used in the pretraining corpora (and considerations related to filtering)
* PII in the data used in the pretraining corpora (and efforts to pseudonymize the data)
For ethical considerations related to the KLUE Benchmark, also see the paper.
Training
--------
#### Training Data
The authors use the following pretraining corpora for the model, described in the associated paper:
>
> We gather the following five publicly available Korean corpora from diverse sources to cover a broad set of topics and many different styles. We combine these corpora to build the final pretraining corpus of size approximately 62GB.
>
>
> * MODU: Modu Corpus is a collection of Korean corpora distributed by National Institute of Korean Languages. It includes both formal articles (news and books) and colloquial text (dialogues).
> * CC-100-Kor: CC-100 is the large-scale multilingual web crawled corpora by using CC-Net (Wenzek et al., 2020). This is used for training XLM-R (Conneau et al., 2020). We use the Korean portion from this corpora.
> * NAMUWIKI: NAMUWIKI is a Korean web-based encyclopedia, similar to Wikipedia, but known to be less formal. Specifically, we download the dump created on March 2nd, 2020.
> * NEWSCRAWL: NEWSCRAWL consists of 12,800,000 news articles published from 2011 to 2020, collected from a news aggregation platform.
> * PETITION: Petition is a collection of public petitions posted to the Blue House asking for administrative actions on social issues. We use the articles in the Blue House National Petition published from August 2017 to March 2019.
>
>
>
The authors also describe ethical considerations related to the pretraining corpora in the associated paper.
#### Training Procedure
##### Preprocessing
The authors describe their preprocessing procedure in the associated paper:
>
> We filter noisy text and non-Korean text using the same methods from Section 2.3 (of the paper). Each document in the corpus is split into sentences using C++ implementation (v1.3.1.) of rule-based Korean Sentence Splitter (KSS). For CC-100-Kor and NEWSCRAWL, we keep sentences of length greater than equal to 200 characters, as a heuristics to keep well-formed sentences. We then remove sentences included in our benchmark task datasets, using BM25 as a sentence similarity metric (reference).
>
>
>
###### Tokenization
The authors describe their tokenization procedure in the associated paper:
>
> We design and use a new tokenization method, morpheme-based subword tokenization. When building a vocabulary, we pre-tokenize a raw text into morphemes using a morphological analyzer, and then we apply byte pair encoding (BPE) (Senrich et al., 2016) to get the final vocabulary. For morpheme segmentation, we use Mecab-ko, MeCab (Kudo, 2006) adapted for Korean, and for BPE segmentation, we use the wordpiece tokenizer from Huggingface Tokenizers library. We specify the vocabulary size to 32k. After building the vocabulary, we only use the BPE model during inference, which allows us to tokenize a word sequence by reflecting morphemes without a morphological analyzer. This improves both usability and speed.
>
>
>
The training configurations are further described in the paper.
Evaluation
----------
#### Testing Data, Factors and Metrics
The model was evaluated on the KLUE Benchmark. The tasks and metrics from the KLUE Benchmark that were used to evaluate this model are described briefly below. For more information about the KLUE Benchmark, see the data card, Github Repository, and associated paper.
* Task: Topic Classification (TC) - Yonhap News Agency Topic Classification (YNAT), Metrics: Macro F1 score, defined as the mean of topic-wise F1 scores, giving the same importance to each topic.
* Task: Semantic Textual Similarity (STS), Metrics: Pearson's correlation coefficient (Pearson's r) and F1 score
* Task: Natural Language Inference (NLI), Metrics: Accuracy
* Task: Named Entity Recognition (NER), Metrics: Entity-level macro F1 (Entity F1) and character-level macro F1 (Char F1) scores
* Task: Relation Extraction (RE), Metrics: Micro F1 score on cases where a relation exists and area under the precision-recall curve (AUPRC) on all classes
* Task: Dependency Parsing (DP), Metrics: Unlabeled attachment score (UAS) and labeled attachment score (LAS)
* Task: Machine Reading Comprehension (MRC), Metrics: Exact match (EM) and character-level ROUGE-W (ROUGE), which can be viewed as longest common consecutive subsequence (LCCS)-based F1 score.
* Task: Dialogue State Tracking (DST), Metrics: Joint goal accuracy (JGA) and slot micro F1 score (Slot F1)
#### Results
Environmental Impact
--------------------
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.
* Hardware Type: TPU v3-8
* Hours used: Unknown
* Cloud Provider: Unknown
* Compute Region: Unknown
* Carbon Emitted: Unknown
Technical Specifications
------------------------
See the associated paper for details on the modeling architecture (BERT), objective, compute infrastructure, and training details.
|
[
"#### Direct Use\n\n\nThe model can be used for tasks including topic classification, semantic textual similarity, natural language inference, named entity recognition, and other tasks outlined in the KLUE Benchmark.",
"#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The model developers discuss several ethical considerations related to the model in the paper, including:\n\n\n* Bias issues with the publicly available data used in the pretraining corpora (and considerations related to filtering)\n* PII in the data used in the pretraining corpora (and efforts to pseudonymize the data)\n\n\nFor ethical considerations related to the KLUE Benchmark, also see the paper.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe authors use the following pretraining corpora for the model, described in the associated paper:\n\n\n\n> \n> We gather the following five publicly available Korean corpora from diverse sources to cover a broad set of topics and many different styles. We combine these corpora to build the final pretraining corpus of size approximately 62GB.\n> \n> \n> * MODU: Modu Corpus is a collection of Korean corpora distributed by National Institute of Korean Languages. It includes both formal articles (news and books) and colloquial text (dialogues).\n> * CC-100-Kor: CC-100 is the large-scale multilingual web crawled corpora by using CC-Net (Wenzek et al., 2020). This is used for training XLM-R (Conneau et al., 2020). We use the Korean portion from this corpora.\n> * NAMUWIKI: NAMUWIKI is a Korean web-based encyclopedia, similar to Wikipedia, but known to be less formal. Specifically, we download the dump created on March 2nd, 2020.\n> * NEWSCRAWL: NEWSCRAWL consists of 12,800,000 news articles published from 2011 to 2020, collected from a news aggregation platform.\n> * PETITION: Petition is a collection of public petitions posted to the Blue House asking for administrative actions on social issues. We use the articles in the Blue House National Petition published from August 2017 to March 2019.\n> \n> \n> \n\n\nThe authors also describe ethical considerations related to the pretraining corpora in the associated paper.",
"#### Training Procedure",
"##### Preprocessing\n\n\nThe authors describe their preprocessing procedure in the associated paper:\n\n\n\n> \n> We filter noisy text and non-Korean text using the same methods from Section 2.3 (of the paper). Each document in the corpus is split into sentences using C++ implementation (v1.3.1.) of rule-based Korean Sentence Splitter (KSS). For CC-100-Kor and NEWSCRAWL, we keep sentences of length greater than equal to 200 characters, as a heuristics to keep well-formed sentences. We then remove sentences included in our benchmark task datasets, using BM25 as a sentence similarity metric (reference).\n> \n> \n>",
"###### Tokenization\n\n\nThe authors describe their tokenization procedure in the associated paper:\n\n\n\n> \n> We design and use a new tokenization method, morpheme-based subword tokenization. When building a vocabulary, we pre-tokenize a raw text into morphemes using a morphological analyzer, and then we apply byte pair encoding (BPE) (Senrich et al., 2016) to get the final vocabulary. For morpheme segmentation, we use Mecab-ko, MeCab (Kudo, 2006) adapted for Korean, and for BPE segmentation, we use the wordpiece tokenizer from Huggingface Tokenizers library. We specify the vocabulary size to 32k. After building the vocabulary, we only use the BPE model during inference, which allows us to tokenize a word sequence by reflecting morphemes without a morphological analyzer. This improves both usability and speed.\n> \n> \n> \n\n\nThe training configurations are further described in the paper.\n\n\nEvaluation\n----------",
"#### Testing Data, Factors and Metrics\n\n\nThe model was evaluated on the KLUE Benchmark. The tasks and metrics from the KLUE Benchmark that were used to evaluate this model are described briefly below. For more information about the KLUE Benchmark, see the data card, Github Repository, and associated paper.\n\n\n* Task: Topic Classification (TC) - Yonhap News Agency Topic Classification (YNAT), Metrics: Macro F1 score, defined as the mean of topic-wise F1 scores, giving the same importance to each topic.\n* Task: Semantic Textual Similarity (STS), Metrics: Pearsons' correlation coefficient (Pearson’ r) and F1 score\n* Task: Natural Language Inference (NLI), Metrics: Accuracy\n* Task: Named Entity Recognition (NER), Metrics: Entity-level macro F1 (Entity F1) and character-level macro F1 (Char F1) scores\n* Task: Relation Extraction (RE), Metrics: Micro F1 score on relation existing cases and area under the precision- recall curve (AUPRC) on all classes\n* Task: Dependency Parsing (DP), Metrics: Unlabeled attachment score (UAS) and labeled attachment score (LAS)\n* Task: Machine Reading Comprehension (MRC), Metrics: Exact match (EM) and character-level ROUGE-W (ROUGE), which can be viewed as longest common consecutive subsequence (LCCS)-based F1 score.\n* Task: Dialogue State Tracking (DST), Metrics: Joint goal accuracy (JGA) and slot micro F1 score (Slot F1)",
"#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n* Hardware Type: TPU v3-8\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture (BERT), objective, compute infrastructure, and training details."
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #fill-mask #korean #klue #ko #arxiv-2105.09680 #arxiv-1910.09700 #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"#### Direct Use\n\n\nThe model can be used for tasks including topic classification, semantic textual similarity, natural language inference, named entity recognition, and other tasks outlined in the KLUE Benchmark.",
"#### Misuse and Out-of-scope Use\n\n\nThe model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.\n\n\nRisks, Limitations and Biases\n-----------------------------\n\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). The model developers discuss several ethical considerations related to the model in the paper, including:\n\n\n* Bias issues with the publicly available data used in the pretraining corpora (and considerations related to filtering)\n* PII in the data used in the pretraining corpora (and efforts to pseudonymize the data)\n\n\nFor ethical considerations related to the KLUE Benchmark, also see the paper.\n\n\nTraining\n--------",
"#### Training Data\n\n\nThe authors use the following pretraining corpora for the model, described in the associated paper:\n\n\n\n> \n> We gather the following five publicly available Korean corpora from diverse sources to cover a broad set of topics and many different styles. We combine these corpora to build the final pretraining corpus of size approximately 62GB.\n> \n> \n> * MODU: Modu Corpus is a collection of Korean corpora distributed by National Institute of Korean Languages. It includes both formal articles (news and books) and colloquial text (dialogues).\n> * CC-100-Kor: CC-100 is the large-scale multilingual web crawled corpora by using CC-Net (Wenzek et al., 2020). This is used for training XLM-R (Conneau et al., 2020). We use the Korean portion from this corpora.\n> * NAMUWIKI: NAMUWIKI is a Korean web-based encyclopedia, similar to Wikipedia, but known to be less formal. Specifically, we download the dump created on March 2nd, 2020.\n> * NEWSCRAWL: NEWSCRAWL consists of 12,800,000 news articles published from 2011 to 2020, collected from a news aggregation platform.\n> * PETITION: Petition is a collection of public petitions posted to the Blue House asking for administrative actions on social issues. We use the articles in the Blue House National Petition published from August 2017 to March 2019.\n> \n> \n> \n\n\nThe authors also describe ethical considerations related to the pretraining corpora in the associated paper.",
"#### Training Procedure",
"##### Preprocessing\n\n\nThe authors describe their preprocessing procedure in the associated paper:\n\n\n\n> \n> We filter noisy text and non-Korean text using the same methods from Section 2.3 (of the paper). Each document in the corpus is split into sentences using C++ implementation (v1.3.1.) of rule-based Korean Sentence Splitter (KSS). For CC-100-Kor and NEWSCRAWL, we keep sentences of length greater than equal to 200 characters, as a heuristics to keep well-formed sentences. We then remove sentences included in our benchmark task datasets, using BM25 as a sentence similarity metric (reference).\n> \n> \n>",
"###### Tokenization\n\n\nThe authors describe their tokenization procedure in the associated paper:\n\n\n\n> \n> We design and use a new tokenization method, morpheme-based subword tokenization. When building a vocabulary, we pre-tokenize a raw text into morphemes using a morphological analyzer, and then we apply byte pair encoding (BPE) (Senrich et al., 2016) to get the final vocabulary. For morpheme segmentation, we use Mecab-ko, MeCab (Kudo, 2006) adapted for Korean, and for BPE segmentation, we use the wordpiece tokenizer from Huggingface Tokenizers library. We specify the vocabulary size to 32k. After building the vocabulary, we only use the BPE model during inference, which allows us to tokenize a word sequence by reflecting morphemes without a morphological analyzer. This improves both usability and speed.\n> \n> \n> \n\n\nThe training configurations are further described in the paper.\n\n\nEvaluation\n----------",
"#### Testing Data, Factors and Metrics\n\n\nThe model was evaluated on the KLUE Benchmark. The tasks and metrics from the KLUE Benchmark that were used to evaluate this model are described briefly below. For more information about the KLUE Benchmark, see the data card, Github Repository, and associated paper.\n\n\n* Task: Topic Classification (TC) - Yonhap News Agency Topic Classification (YNAT), Metrics: Macro F1 score, defined as the mean of topic-wise F1 scores, giving the same importance to each topic.\n* Task: Semantic Textual Similarity (STS), Metrics: Pearsons' correlation coefficient (Pearson’ r) and F1 score\n* Task: Natural Language Inference (NLI), Metrics: Accuracy\n* Task: Named Entity Recognition (NER), Metrics: Entity-level macro F1 (Entity F1) and character-level macro F1 (Char F1) scores\n* Task: Relation Extraction (RE), Metrics: Micro F1 score on relation existing cases and area under the precision- recall curve (AUPRC) on all classes\n* Task: Dependency Parsing (DP), Metrics: Unlabeled attachment score (UAS) and labeled attachment score (LAS)\n* Task: Machine Reading Comprehension (MRC), Metrics: Exact match (EM) and character-level ROUGE-W (ROUGE), which can be viewed as longest common consecutive subsequence (LCCS)-based F1 score.\n* Task: Dialogue State Tracking (DST), Metrics: Joint goal accuracy (JGA) and slot micro F1 score (Slot F1)",
"#### Results\n\n\n\nEnvironmental Impact\n--------------------\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n* Hardware Type: TPU v3-8\n* Hours used: Unknown\n* Cloud Provider: Unknown\n* Compute Region: Unknown\n* Carbon Emitted: Unknown\n\n\nTechnical Specifications\n------------------------\n\n\nSee the associated paper for details on the modeling architecture (BERT), objective, compute infrastructure, and training details."
] |
fill-mask
|
transformers
|
# KLUE RoBERTa base
Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details.
## How to use
_NOTE:_ Use `BertTokenizer` instead of `RobertaTokenizer`. (`AutoTokenizer` will load `BertTokenizer`)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("klue/roberta-base")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
```
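The checkpoint can also be exercised end-to-end with the fill-mask pipeline (a sketch; the example sentence is the widget text from this card, and the predictions depend on the released weights):

```python
from transformers import pipeline

# The pipeline resolves the tokenizer automatically (a BertTokenizer, per the note above).
fill_mask = pipeline("fill-mask", model="klue/roberta-base")
print(fill_mask("대한민국의 수도는 [MASK] 입니다."))
```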
## BibTeX entry and citation info
```bibtex
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ko", "tags": ["korean", "klue"], "mask_token": "[MASK]", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 [MASK] \uc785\ub2c8\ub2e4."}]}
|
klue/roberta-base
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"korean",
"klue",
"ko",
"arxiv:2105.09680",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2105.09680"
] |
[
"ko"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #korean #klue #ko #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# KLUE RoBERTa base
Pretrained RoBERTa Model on Korean Language. See Github and Paper for more details.
## How to use
_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')
## BibTeX entry and citation info
|
[
"# KLUE RoBERTa base\n\nPretrained RoBERTa Model on Korean Language. See Github and Paper for more details.",
"## How to use\n\n_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #korean #klue #ko #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# KLUE RoBERTa base\n\nPretrained RoBERTa Model on Korean Language. See Github and Paper for more details.",
"## How to use\n\n_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')",
"## BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# KLUE RoBERTa large
Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details.
## How to use
_NOTE:_ Use `BertTokenizer` instead of `RobertaTokenizer`. (`AutoTokenizer` will load `BertTokenizer`)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("klue/roberta-large")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
```
## BibTeX entry and citation info
```bibtex
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ko", "tags": ["korean", "klue"], "mask_token": "[MASK]", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 [MASK] \uc785\ub2c8\ub2e4."}]}
|
klue/roberta-large
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"korean",
"klue",
"ko",
"arxiv:2105.09680",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2105.09680"
] |
[
"ko"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #korean #klue #ko #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# KLUE RoBERTa large
Pretrained RoBERTa Model on Korean Language. See Github and Paper for more details.
## How to use
_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')
## BibTeX entry and citation info
|
[
"# KLUE RoBERTa large\n\nPretrained RoBERTa Model on Korean Language. See Github and Paper for more details.",
"## How to use\n\n_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #korean #klue #ko #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# KLUE RoBERTa large\n\nPretrained RoBERTa Model on Korean Language. See Github and Paper for more details.",
"## How to use\n\n_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')",
"## BibTeX entry and citation info"
] |
fill-mask
|
transformers
|
# KLUE RoBERTa small
Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details.
## How to use
_NOTE:_ Use `BertTokenizer` instead of `RobertaTokenizer`. (`AutoTokenizer` will load `BertTokenizer`)
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("klue/roberta-small")
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-small")
```
## BibTeX entry and citation info
```bibtex
@misc{park2021klue,
title={KLUE: Korean Language Understanding Evaluation},
author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
year={2021},
eprint={2105.09680},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ko", "tags": ["korean", "klue"], "mask_token": "[MASK]", "widget": [{"text": "\ub300\ud55c\ubbfc\uad6d\uc758 \uc218\ub3c4\ub294 [MASK] \uc785\ub2c8\ub2e4."}]}
|
klue/roberta-small
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"korean",
"klue",
"ko",
"arxiv:2105.09680",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2105.09680"
] |
[
"ko"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #korean #klue #ko #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# KLUE RoBERTa small
Pretrained RoBERTa Model on Korean Language. See Github and Paper for more details.
## How to use
_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')
## BibTeX entry and citation info
|
[
"# KLUE RoBERTa small\n\nPretrained RoBERTa Model on Korean Language. See Github and Paper for more details.",
"## How to use\n\n_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #korean #klue #ko #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# KLUE RoBERTa small\n\nPretrained RoBERTa Model on Korean Language. See Github and Paper for more details.",
"## How to use\n\n_NOTE:_ Use 'BertTokenizer' instead of RobertaTokenizer. ('AutoTokenizer' will load 'BertTokenizer')",
"## BibTeX entry and citation info"
] |
summarization
|
transformers
|
### Pegasus Models
See Docs: [here](https://huggingface.co/transformers/master/model_doc/pegasus.html)
Original TF 1 code [here](https://github.com/google-research/pegasus)
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: [@sshleifer](https://twitter.com/sam_shleifer)
Task: Summarization
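A minimal generation sketch for this checkpoint (an illustration added here, not part of the original card; it assumes the repo ships the standard Pegasus seq2seq head and tokenizer):

```python
from transformers import AutoTokenizer, PegasusForConditionalGeneration

model_id = "kmfoda/staging-pegasus-gmeetsamsum"  # this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

text = "PEGASUS is pre-trained by removing important sentences from a document and generating them as a pseudo-summary."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch, max_length=60)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```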
The following is copied from the authors' README.
# Mixed & Stochastic Checkpoints
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated results are reported in this table.
| dataset | C4 | HugeNews | Mixed & Stochastic|
| ---- | ---- | ---- | ----|
| xsum | 45.20/22.06/36.99 | 47.21/24.56/39.25 | 47.60/24.83/39.64|
| cnn_dailymail | 43.90/21.20/40.76 | 44.17/21.47/41.11 | 44.16/21.56/41.30|
| newsroom | 45.07/33.39/41.28 | 45.15/33.51/41.33 | 45.98/34.20/42.18|
| multi_news | 46.74/17.95/24.26 | 47.52/18.72/24.91 | 47.65/18.75/24.95|
| gigaword | 38.75/19.96/36.14 | 39.12/19.86/36.24 | 39.65/20.47/36.76|
| wikihow | 43.07/19.70/34.79 | 41.35/18.51/33.42 | 46.39/22.12/38.41 *|
| reddit_tifu | 26.54/8.94/21.64 | 26.63/9.01/21.60 | 27.99/9.81/22.94|
| big_patent | 53.63/33.16/42.25 | 53.41/32.89/42.07 | 52.29/33.08/41.66 *|
| arxiv | 44.70/17.27/25.80 | 44.67/17.18/25.73 | 44.21/16.95/25.67|
| pubmed | 45.49/19.90/27.69 | 45.09/19.56/27.42 | 45.97/20.15/28.25|
| aeslc | 37.69/21.85/36.84 | 37.40/21.22/36.45 | 37.68/21.25/36.51|
| billsum | 57.20/39.56/45.80 | 57.31/40.19/45.82 | 59.67/41.58/47.59|
The "Mixed & Stochastic" model has the following changes:
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
(*) the numbers of the wikihow and big_patent datasets are not comparable because of changes in tokenization and data:
- the wikihow dataset contains newline characters, which are useful for paragraph segmentation; the C4 and HugeNews models' sentencepiece tokenizer doesn't encode newlines and loses this information.
- we updated the BigPatent dataset to preserve casing; some format cleanings were also changed, please refer to the change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
- trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
- trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
- the model uniformly samples a gap sentence ratio between 15% and 45%.
- important sentences are sampled by applying 20% uniform noise to importance scores.
- the sentencepiece tokenizer is updated to be able to encode the newline character.
Citation
```
@misc{zhang2019pegasus,
title={PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization},
author={Jingqing Zhang and Yao Zhao and Mohammad Saleh and Peter J. Liu},
year={2019},
eprint={1912.08777},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "tags": ["summarization"]}
|
kmfoda/staging-pegasus-gmeetsamsum
| null |
[
"transformers",
"pytorch",
"pegasus",
"feature-extraction",
"summarization",
"en",
"arxiv:1912.08777",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1912.08777"
] |
[
"en"
] |
TAGS
#transformers #pytorch #pegasus #feature-extraction #summarization #en #arxiv-1912.08777 #endpoints_compatible #region-us
|
### Pegasus Models
See Docs: here
Original TF 1 code here
Authors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019
Maintained by: @sshleifer
Task: Summarization
The following is copied from the authors' README.
Mixed & Stochastic Checkpoints
==============================
We train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.
The "Mixed & Stochastic" model has the following changes:
* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
* the model uniformly sample a gap sentence ratio between 15% and 45%.
* importance sentences are sampled using a 20% uniform noise to importance scores.
* the sentencepiece tokenizer is updated to be able to encode newline character.
(\*) the numbers of wikihow and big\_patent datasets are not comparable because of change in tokenization and data:
* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.
* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.
The "Mixed & Stochastic" model has the following changes (from pegasus-large in the paper):
trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).
trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).
the model uniformly sample a gap sentence ratio between 15% and 45%.
importance sentences are sampled using a 20% uniform noise to importance scores.
the sentencepiece tokenizer is updated to be able to encode newline character.
Citation
|
[
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
[
"TAGS\n#transformers #pytorch #pegasus #feature-extraction #summarization #en #arxiv-1912.08777 #endpoints_compatible #region-us \n",
"### Pegasus Models\n\n\nSee Docs: here\n\n\nOriginal TF 1 code here\n\n\nAuthors: Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019\n\n\nMaintained by: @sshleifer\n\n\nTask: Summarization\n\n\nThe following is copied from the authors' README.\n\n\nMixed & Stochastic Checkpoints\n==============================\n\n\nWe train a pegasus model with sampled gap sentence ratios on both C4 and HugeNews, and stochastically sample important sentences. The updated the results are reported in this table.\n\n\n\nThe \"Mixed & Stochastic\" model has the following changes:\n\n\n* trained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\n* trained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\n* the model uniformly sample a gap sentence ratio between 15% and 45%.\n* importance sentences are sampled using a 20% uniform noise to importance scores.\n* the sentencepiece tokenizer is updated to be able to encode newline character.\n\n\n(\\*) the numbers of wikihow and big\\_patent datasets are not comparable because of change in tokenization and data:\n\n\n* wikihow dataset contains newline characters which is useful for paragraph segmentation, the C4 and HugeNews model's sentencepiece tokenizer doesn't encode newline and loose this information.\n* we update the BigPatent dataset to preserve casing, some format cleanings are also changed, please refer to change in TFDS.\n\n\nThe \"Mixed & Stochastic\" model has the following changes (from pegasus-large in the paper):\n\n\ntrained on both C4 and HugeNews (dataset mixture is weighted by their number of examples).\ntrained for 1.5M instead of 500k (we observe slower convergence on pretraining perplexity).\nthe model uniformly sample a gap sentence ratio between 15% and 45%.\nimportance sentences are sampled using a 20% uniform noise to importance scores.\nthe sentencepiece tokenizer is updated to be able to encode newline character.\n\n\nCitation"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
return example
test_dataset = test_dataset.map(prepare_example)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("kmfoda/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\؟\_\؛\ـ\—]'
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
    speech, sampling_rate = torchaudio.load(example["path"])
    example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
    # clean the reference transcription with the regex defined above so WER is computed on normalised text
    example["sentence"] = re.sub(chars_to_ignore_regex, "", example["sentence"]).lower()
    return example
test_dataset = test_dataset.map(prepare_example)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.53
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://huggingface.co/kmfoda/wav2vec2-large-xlsr-arabic/tree/main)
|
{"language": "ar", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Arabic by Othmane Rifki", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "common_voice", "args": "ar"}, "metrics": [{"type": "wer", "value": 46.77, "name": "Test WER"}]}]}]}
|
kmfoda/wav2vec2-large-xlsr-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ar #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
Test Result: 52.53
## Training
The Common Voice 'train', 'validation' datasets were used for training.
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Arabic\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the Common Voice. \nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Arabic test data of Common Voice. \n\n\n\nTest Result: 52.53",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ar #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Arabic\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the Common Voice. \nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Arabic test data of Common Voice. \n\n\n\nTest Result: 52.53",
"## Training\n\nThe Common Voice 'train', 'validation' datasets were used for training.\n\nThe script used for training can be found here"
] |
text-generation
|
transformers
|
# Harry Potter model
|
{"tags": ["conversational"]}
|
knightbat/harry-potter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
summarization
|
transformers
|
Model obtained by fine-tuning 'facebook/bart-large-xsum'.
## Usage
# Example 1
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")
text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.
'''
summarizer(text)
```
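Summary length can be steered with the usual generation keyword arguments of the pipeline (a sketch; the values below are illustrative, not tuned defaults for this model):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")
# min_length / max_length bound the generated summary in tokens
summarizer("Replace this with the meeting transcript or article to summarize.", max_length=64, min_length=16, do_sample=False)
```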
# Example 2
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")
text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day.
After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands.
The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006.
Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020.
'''
summarizer(text)
```
# Example 3
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")
text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool.
Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? 
Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming.
Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh.
Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. 
So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright.
'''
summarizer(text)
```
# Example 4
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI")
text = '''
Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down… and the advantage that gives a SaaS startup against incumbents.
David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. And so as you go down market with a lower price point, usually, you can’t actually support it.
Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS?
Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly.
David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.”
Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business.
Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force?
Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target.
Das : So right now, you’ve got a lot more people working remote. Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service?
Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS.
David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person.
Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market?
Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case.
Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams?
David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential.
Kristina : I think the smartest founders are making sure they have the runway to continue to exist. And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base.
Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking?
Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well.
David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model.
Kristina : We talk a lot about everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked if you want to continue using the product to pay. And then there’s pay as go, and particularly, pay as you go as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.”
David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means.
Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes. The pricing is very aligned to your productivity.
David : Generally, those are for products where the usage only goes in one direction. If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data.
Kristina : Recently, there’s been a huge trend of payment as a revenue. It’s particularly common in vertical markets where SaaS companies are adding payments as a revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites.
Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market?
Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points.
David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis.
Das : When you’re working with companies, why are they underpricing their products?
David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher.
Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible.
Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging?
David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.”
Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well.
Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make?
David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition.
Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But 5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see, people guess the ramp time of a sales rep wrong. And ramp time really ties to the segment of customer you’re selling into. It tends be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tend to be much longer as well. They could be six months plus, could be a year. While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly.
David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal.
Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in?
Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months.
Das : How should you think about doing a free trial versus a paid trial?
David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement?
Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access.
David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience.
Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on…
David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion.
Das : Awesome. Thank you, Kristina. Thank you, David.
Kristina : Thanks so much for having us. This was fun.
David : Yeah, a lot of fun, great topics, and our favorite thing to talk about.
'''
summarizer(text)
```
|
{"language": "en", "license": "apache-2.0", "tags": ["bart", "seq2seq", "summarization"], "datasets": ["cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI"], "metrics": ["rouge"], "widget": [{"text": "Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool.\nOkay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. 
Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. 
Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming.\nUm I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh.\nMm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. 
Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. 
Alright."}], "model-index": [{"name": "bart-large-meeting-summary-xsum-samsum-dialogsum-AMI", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI Meeting Corpus", "type": "cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI Meeting Corpus"}, "metrics": [{"type": "rouge-1", "value": "NA", "name": "Validation ROGUE-1"}, {"type": "rouge-2", "value": "NA", "name": "Validation ROGUE-2"}, {"type": "rouge-L", "value": "NA", "name": "Validation ROGUE-L"}, {"type": "rouge-Lsum", "value": "NA", "name": "Validation ROGUE-Lsum"}, {"type": "rouge-1", "value": "NA", "name": "Test ROGUE-1"}, {"type": "rouge-2", "value": "NA", "name": "Test ROGUE-2"}, {"type": "rouge-L", "value": "NA", "name": "Test ROGUE-L"}, {"type": "rouge-Lsum", "value": "NA", "name": "Test ROGUE-Lsum"}]}]}]}
|
knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM-AMI
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #bart #text2text-generation #seq2seq #summarization #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Model obtained by Fine Tuning 'facebook/bart-large-xsum'
## Usage
# Example 1
# Example 2
# Example 3
# Example 4
|
[
"## Usage",
"# Example 1",
"# Example 2",
"# Example 3",
"# Example 4"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #bart #text2text-generation #seq2seq #summarization #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Usage",
"# Example 1",
"# Example 2",
"# Example 3",
"# Example 4"
] |
summarization
|
transformers
|
Model obtained by fine-tuning 'facebook/bart-large-xsum' for meeting and dialogue summarization
## Usage
# Example 1
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")
text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.
'''
summarizer(text)
```
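The default call lets the model pick its own summary length. If a tighter or longer summary is wanted, the usual generation arguments can be passed straight through the pipeline; the sketch below is illustrative, and the `min_length`/`max_length` values are assumptions rather than tuned recommendations.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")

text = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris."

# min_length / max_length are counted in generated tokens; the values here
# are placeholders to show the knobs, not recommended settings.
result = summarizer(text, min_length=10, max_length=60)
print(result[0]["summary_text"])
```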
# Example 2
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")
text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day.
After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands.
The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006.
Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020.
'''
summarizer(text)
```
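Several documents can also be summarized in one call by passing a list; the pipeline returns one result dict per input. The strings below are placeholders standing in for real passages.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")

# Placeholder inputs; replace with the passages you want summarized.
texts = [
    "The Eiffel Tower is 324 metres tall and held the title of tallest man-made structure for 41 years.",
    "Bangalore is the capital of Karnataka and is widely regarded as the Silicon Valley of India.",
]

# A list input is processed as a batch; one summary dict comes back per input.
for result in summarizer(texts):
    print(result["summary_text"])
```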
# Example 3
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")
text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool.
Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? 
Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming.
Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh.
Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. 
So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright.
'''
summarizer(text)
```
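One caveat for transcripts of this size: BART-large checkpoints read at most 1024 input tokens, so a meeting this long will not fit in a single pass. Enabling truncation keeps the call from failing, at the cost of ignoring everything past the cut-off; this is a minimal sketch, not a recommendation from the model authors.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")

text = "..."  # a long meeting transcript, e.g. the one in Example 3

# truncation=True clips the input at the encoder's maximum length (1024 tokens
# for BART-large); anything past the cut-off is simply not seen by the model.
summary = summarizer(text, truncation=True)
print(summary[0]["summary_text"])
```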
# Example 4
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM")
text = '''
Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down… and the advantage that gives a SaaS startup against incumbents.
David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. And so as you go down market with a lower price point, usually, you can’t actually support it.
Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS?
Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly.
David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.”
Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business.
Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force?
Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target.
Das : So right now, you’ve got a lot more people working remote. Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service?
Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS.
David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person.
Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market?
Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case.
Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams?
David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential.
Kristina : I think the smartest founders are making sure they have the runway to continue to exist. And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base.
Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking?
Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well.
David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model.
Kristina : We talk a lot about everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked if you want to continue using the product to pay. And then there’s pay as go, and particularly, pay as you go as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.”
David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means.
Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes. The pricing is very aligned to your productivity.
David : Generally, those are for products where the usage only goes in one direction. If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data.
Kristina : Recently, there’s been a huge trend of payment as a revenue. It’s particularly common in vertical markets where SaaS companies are adding payments as a revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites.
Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market?
Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points.
David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis.
Das : When you’re working with companies, why are they underpricing their products?
David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher.
Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible.
Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging?
David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.”
Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well.
Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make?
David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition.
Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But 5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see, people guess the ramp time of a sales rep wrong. And ramp time really ties to the segment of customer you’re selling into. It tends be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tend to be much longer as well. They could be six months plus, could be a year. While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly.
David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal.
Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in?
Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months.
Das : How should you think about doing a free trial versus a paid trial?
David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement?
Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access.
David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience.
Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on…
David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion.
Das : Awesome. Thank you, Kristina. Thank you, David.
Kristina : Thanks so much for having us. This was fun.
David : Yeah, a lot of fun, great topics, and our favorite thing to talk about.
'''
summarizer(text)
```
|
{"language": "en", "license": "apache-2.0", "tags": ["bart", "seq2seq", "summarization"], "datasets": ["cnndaily/newyorkdaily/xsum/samsum/dialogsum"], "metrics": ["rouge"], "widget": [{"text": "Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool.\nOkay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. 
Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. 
Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming.\nUm I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh.\nMm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. 
Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. 
Alright."}], "model-index": [{"name": "bart-large-meeting-summary-xsum-samsum-dialogsum", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "cnndaily/newyorkdaily/xsum/samsum/dialogsum", "type": "cnndaily/newyorkdaily/xsum/samsum/dialogsum"}, "metrics": [{"type": "rouge-1", "value": "NA", "name": "Validation ROGUE-1"}, {"type": "rouge-2", "value": "NA", "name": "Validation ROGUE-2"}, {"type": "rouge-L", "value": "NA", "name": "Validation ROGUE-L"}, {"type": "rouge-Lsum", "value": "NA", "name": "Validation ROGUE-Lsum"}, {"type": "rouge-1", "value": "NA", "name": "Test ROGUE-1"}, {"type": "rouge-2", "value": "NA", "name": "Test ROGUE-2"}, {"type": "rouge-L", "value": "NA", "name": "Test ROGUE-L"}, {"type": "rouge-Lsum", "value": "NA", "name": "Test ROGUE-Lsum"}]}]}]}
|
knkarthick/MEETING-SUMMARY-BART-LARGE-XSUM-SAMSUM-DIALOGSUM
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #bart #text2text-generation #seq2seq #summarization #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Model obtained by fine-tuning 'facebook/bart-large-xsum'
## Usage
# Example 1
# Example 2
# Example 3
# Example 4
|
[
"## Usage",
"# Example 1",
"# Example 2",
"# Example 3",
"# Example 4"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #bart #text2text-generation #seq2seq #summarization #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Usage",
"# Example 1",
"# Example 2",
"# Example 3",
"# Example 4"
] |
summarization
|
transformers
|
Model obtained by fine-tuning 'facebook/bart-large-xsum' on the AMI Meeting Corpus and the SAMSUM, DIALOGSUM, and XSUM datasets.
## Usage
# Example 1
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
text = '''The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.
'''
summarizer(text)
```
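The pipeline also forwards generation keyword arguments to the underlying model, so the summary length can be bounded per call. A minimal variation of the example above, with illustrative values rather than settings from the model author:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
text = "The tower is 324 metres (1,063 ft) tall ..."  # same passage as in Example 1

# min_length / max_length bound the generated summary in tokens; do_sample=False keeps
# decoding deterministic. The exact values here are illustrative, not tuned defaults.
print(summarizer(text, max_length=64, min_length=16, do_sample=False))
```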
# Example 2
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
text = '''Bangalore is the capital and the largest city of the Indian state of Karnataka. It has a population of more than 8 million and a metropolitan population of around 11 million, making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over 900 m (3,000 ft) above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.The city's history dates back to around 890 CE, in a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada), mentions 'Bengaluru Kalaga' (battle of Bengaluru). It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day.
After the fall of Vijayanagar empire in 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands.
The city was captured by the British East India Company after victory in the Fourth Anglo-Mysore War (1799), who returned administrative control of the city to the Maharaja of Mysore. The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006.
Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. As of 2017, Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020.
'''
summarizer(text)
```
# Example 3
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
text = '''Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool.
Okay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? 
Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming.
Um I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh.
Mm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. 
So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. Alright.
'''
summarizer(text)
```
# Example 4
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/MEETING_SUMMARY")
text = '''
Das : Hi and welcome to the a16z podcast. I’m Das, and in this episode, I talk SaaS go-to-market with David Ulevitch and our newest enterprise general partner Kristina Shen. The first half of the podcast looks at how remote work impacts the SaaS go-to-market and what the smartest founders are doing to survive the current crisis. The second half covers pricing approaches and strategy, including how to think about free versus paid trials and navigating the transition to larger accounts. But we start with why it’s easier to move upmarket than down… and the advantage that gives a SaaS startup against incumbents.
David : If you have a cohort of customers that are paying you $10,000 a year for your product, you’re going to find a customer that self-selects and is willing to pay $100,000 a year. Once you get one of those, your organization will figure out how you sell to, how you satisfy and support, customers at that price point and that size. But it’s really hard for a company that sells up market to move down market, because they’ve already baked in all that expensive, heavy lifting sales motion. And so as you go down market with a lower price point, usually, you can’t actually support it.
Das : Does that mean that it’s easier for a company to do this go-to-market if they’re a new startup as opposed to if they’re a pre-existing SaaS?
Kristina : It’s culturally very, very hard to give a product away for free that you’re already charging for. It feels like you’re eating away at your own potential revenue when you do it. So most people who try it end up pulling back very quickly.
David : This is actually one of the key reasons why the bottoms up SaaS motion is just so competitive, and compelling, and so destructive against the traditional sales-driven test motion. If you have that great product and people are choosing to use it, it’s very hard for somebody with a sales-driven motion, and all the cost that’s loaded into that, to be able to compete against it. There are so many markets where initially, we would look at companies and say, “Oh, well, this couldn’t possibly be bottoms up. It has to be sold to the CIO. It has to be sold to the CSO or the CFO.” But in almost every case we’ve been wrong, and there has been a bottoms up motion. The canonical example is Slack. It’s crazy that Slack is a bottoms up company, because you’re talking about corporate messaging, and how could you ever have a messaging solution that only a few people might be using, that only a team might be using? But now it’s just, “Oh, yeah, some people started using it, and then more people started using it, and then everyone had Slack.”
Kristina : I think another classic example is Dropbox versus Box. Both started as bottoms up businesses, try before you buy. But Box quickly found, “Hey, I’d rather sell to IT.” And Dropbox said, “Hey, we’ve got a great freemium motion going.” And they catalyzed their business around referrals and giving away free storage and shared storage in a way that really helped drive their bottoms up business.
Das : It’s a big leap to go from selling to smaller customers to larger customers. How have you seen SaaS companies know or get the timing right on that? Especially since it does seem like that’s really related to scaling your sales force?
Kristina : Don’t try to go from a 100-person company to a 20,000-person company. Start targeting early adopters, maybe they’re late stage pre-IPO companies, then newly IPO’d companies. Starting in tech tends to be a little bit easier because they tend to be early adopters. Going vertical by vertical can be a great strategy as well. Targeting one customer who might be branded in that space, can help brand yourself in that category. And then all their competitors will also want your product if you do a good job. A lot of times people will dedicate a sales rep to each vertical, so that they become really, really knowledgeable in that space, and also build their own brand and reputation and know who are the right customers to target.
Das : So right now, you’ve got a lot more people working remote. Does this move to remote work mean that on-premise software is dying? And is it accelerating the move to software as a service?
Kristina : This remote work and working from home is only going to catalyze more of the conversion from on-premise over to cloud and SaaS. In general, software spend declines 20% during an economic downturn. This happened in ’08, this happened in ’01. But when we look at the last downturn in ’08, SaaS spend actually, for public companies, increased, on average, 10%, which means there’s a 30% spread, which really shows us that there was a huge catalyst from people moving on-premise to SaaS.
David : And as people work remote, the ability to use SaaS tools is much easier than having to VPN back into your corporate network. We’ve been seeing that, inside sales teams have been doing larger and larger deals, essentially moving up market on the inside, without having to engage with field sales teams. In fact, a lot of the new SaaS companies today rather than building out a field team, they have a hybrid team, where people are working and closing deals on the inside and if they had to go out and meet with a customer, they would do that. But by and large, most of it was happening over the phone, over email, and over videoconferencing. And all the deals now, by definition, are gonna be done remote because people can’t go visit their customers in person.
Das : So with bottoms up, did user behavior and buyer behavior change, so the go-to-market evolved? Or did the go-to-market evolve and then you saw user and buyer behavior change? I’m curious with this move to remote work. Is that going to trigger more changes or has the go-to-market enabled that change in user behavior, even though we see that change coming because of a lot of forces outside of the market?
Kristina : I definitely think they are interrelated. But I do think it was a user change that catalyzed everything. We decided that we preferred better software, and we tried a couple products. We were able to purchase off our credit card. And then IT and procurement eventually said, “Wow, everyone’s buying these already, I might as well get a company license and a company deal so I’m not paying as much.” While obviously software vendors had to offer the products that could be self-served, users started to realize they had the power, they wanted to use better software, they paid with their credit cards. And now software vendors are forced to change their go-to-market to actually suit that use case.
Das : If that’s the case that when user behavior has changed, it’s tended to be the catalyzing force of bigger changes in the go-to-market, what are some of the changes you foresee for SaaS because the world has changed to this new reality of remote work and more distributed teams?
David : We’re in a very uncertain economic environment right now. And a couple of things will become very clear over the next 3 to 9 to 15 months — you’re going to find out which SaaS products are absolutely essential to helping a business operate and run, and which ones were just nice to have and may not get renewed. I think on the customer, buying side, you’re very likely to see people push back on big annual commitments and prefer to go month-to-month where they can. Or you’ll see more incentives from SaaS startups to offer discounts for annual contracts. You’re going to see people that might sign an annual contract, but they may not want to pay upfront. They may prefer to meter the cash out ratably over the term of the contract. And as companies had empowered and allowed budget authority to be pushed down in organizations, you’re gonna see that budget authority get pulled back, more scrutiny on spending, and likely a lot of SaaS products not get renewed that turned out to not be essential.
Kristina : I think the smartest founders are making sure they have the runway to continue to exist. And they’re doing that in a couple of ways. They’re preserving cash, and they are making sure that their existing customers are super, super happy, because retaining your customers is so important in this environment. And they’re making sure that they have efficient or profitable customer acquisition. Don’t spend valuable dollars acquiring customers. But acquire customers efficiently that will add to a great existing customer base.
Das : To go into pricing and packaging for SaaS for a moment, what are some of the different pricing approaches that you see SaaS companies taking?
Kristina : The old school way of doing SaaS go-to-market is bundle everything together, make the pricing super complex, so you don’t actually understand what you’re paying for. You’re forced to purchase it because you need one component of the product. New modern SaaS pricing is keep it simple, keep it tied to value, and make sure you’re solving one thing really, really well.
David : You want to make it easy for your customers to give you money. And if your customers don’t understand your pricing, that’s a huge red flag. Sometimes founders will try to over engineer their pricing model.
Kristina : We talk a lot about everything has to be 10X better than the alternatives. But it’s much easier to be 10X better when you solve one thing very, very well, and then have simple pricing around it. I think the most common that most people know about is PEPM or per employee per month, where you’re charging basically for every single seat. Another really common model is the freemium model. So, think about a Dropbox, or an Asana, or a Skype, where it’s trigger based. You try the product for free, but when you hit a certain amount of storage, or a certain amount of users, then it converts over to paid. And then you also have a time trial, where you get the full experience of the product for some limited time period. And then you’re asked if you want to continue using the product to pay. And then there’s pay as go, and particularly, pay as you go as a usage model. So, Slack will say, “Hey, if your users aren’t actually using the product this month, we won’t actually charge you for it.”
David : The example that Kristina made about Slack and users, everybody understands what a user is, and if they’re using the product, they pay for it, and if they’re not using it, they don’t pay for it. That’s a very friendly way to make it easy for your customers to give you money. If Slack came up with a pricing model that was like based on number of messages, or number of API integration calls, the customer would have no idea what that means.
Kristina : There’s also the consumption model. So Twilio only charges you for every SMS text or phone call that you make on the platform any given month. And so they make money or lose money as your usage goes. The pricing is very aligned to your productivity.
David : Generally, those are for products where the usage only goes in one direction. If you think of a company like Databricks, where they’re charging for storage, or Amazon’s S3 service, it is very aligned with the customer, but it also strategically aligns with the business because they know the switching cost is very high, the churn is very low. And generally, in those businesses, you’re only going to store more data, so they can charge based on usage or volume of data.
Kristina : Recently, there’s been a huge trend of payment as a revenue. It’s particularly common in vertical markets where SaaS companies are adding payments as a revenue in addition to their employee or subscription revenue. If you look at Shopify, for example, more than 50% of their revenue is actually payment revenue. They’re making money every single time you purchase something off one of their shopping cart websites.
Das : When you’re working with a founder or a SaaS startup, how have you seen them find the right pricing model for their product, for their market?
Kristina : Step one is just talk to a lot of customers. Try to figure out what is the market pricing for possible alternatives or competitors, understand their pain points and their willingness to pay. And just throw a price out there, because you have to have a starting point in order to actually test and iterate. Particularly in the SMB, or the bottoms up business, you can test and iterate pretty quickly because you have so many data points.
David : I always tell founders, step one is to just go out there and talk to customers. Step two is just double your prices. I don’t think there’s ever been a great company with a great product that’s fallen apart because their pricing was wrong. But a lot of SaaS startup founders really under price, and you don’t want to find out two or three years later that you were 200% underpriced. A very common thing that SaaS companies do, they’ll have the basic package that either is free or low cost, that you can just sign up online for. They’ll have a middle package where they share some pricing, and then they’ll have the enterprise package where you have to contact sales to find out more. And that way they don’t actually have to show the pricing for that third package. And that gives the salespeople the flexibility to adjust pricing on a per deal basis.
Das : When you’re working with companies, why are they underpricing their products?
David : I think it’s psychological. People need to price on value, and they don’t know how much value they’re delivering relative to “Oh, it only cost me $100 a month to provide this service, so I just need to charge $200.” But if it turns out you’re saving your customer $50,000 a year, then you’re wildly underpriced. You have to remember that SaaS is essentially a proxy for outsourced IT. You’re spending money on a SaaS service to not pay to develop something internally, or to have to pay IT to support something that’s more complex on-prem. Software is much cheaper than people, and so generally, the price point can be much higher.
Kristina : And the other thing is your value increases over time. You’re delivering more features, more products, you understand the customer better. It’s the beauty of the SaaS model and cloud model that you can iterate and push code immediately, and the customer immediately sees value. A lot of times people have the same price point from the first customer sold to three years later and the 200th customer. Quite frankly, you’ve delivered so much value along the way that your price point should have gone up. The other thing I’ll say is a lot of people discount per seat pricing a lot as they move up market. We tend to tell people that the best validation of your product having great product market fit is your ability to hold your price point. So while there is some natural discounting on a per seat basis because people do deserve some volume discounting, I would say try to resist that as much as possible.
Das : Especially for a technical founder, it’s so tempting to get in there and fiddle with these knobs. How do you know when it is time to experiment with your pricing and packaging?
David : If you’re looking at your business and you see that you are doing more deals, and they’re closing faster, you should raise your pricing. And you pay attention to how long it takes to close deals and whether the number of deals is staying consistent as you do that. And, at some point, you’re going to find out when you’re losing deals on price. I think a moment where companies have to plan ahead to avoid having to course correct is after they roll out massive pricing and packaging changes, which are pretty natural as companies move up market. But how they navigate that transition to larger accounts, and how they either bring along or move away from those smaller, earlier customers who got them to where they are, tends to be really important because they can get a lot of noise on Twitter, they can get a lot of blowback from their customers. So Zendesk is a company where they rolled out a major packaging change. And when they rolled it out, they hadn’t planned on grandfathering in their early customers. They got a lot of pushback, and very quickly, they put out a blog post and said, “We hear what you’re saying, we appreciate you building the business that we’ve become today. We do need to have a package for the future. But all the people that have been customers so far will be grandfathered in for at least a period of time into the old model.”
Kristina : If you iterate pricing constantly, you don’t really have this problem because your customers will be used to pricing changes. You normally pair them with new features, and it all kind of works out. But if you have to go through a big grandfather change, I tend to lean towards treating your early customers really, really well. They adopted when you weren’t a big company yet. They probably co-built the product with you in many ways. And so, it’s great to get more dollars out of your customer base, but treat your early customers well.
Das : Are there any other failure modes that you see startups really falling into around pricing and packaging or any common mistakes that they make?
David : I think a lot of founders don’t always map out the cost or model of their pricing and their product relative to their cost of actually doing sales and marketing and customer acquisition.
Kristina : Inside sales is so popular in Silicon Valley. When you’re selling more to an SMB or mid-market type customer, the expectation is that you’re educating and helping the prospective customer over the phone. And so, you’re not expected to be as high touch. But 5K is almost the minimum price point you need to sell to the SMB with an inside sales team in order to pay for the outbound costs and all the conversions, because there is typically a team that sits around the quota carrying rep. And so, price matching — how much your price point is compared to what your go-to-market motion is — matters a lot. Other big failure modes that I see, people guess the ramp time of a sales rep wrong. And ramp time really ties to the segment of customer you’re selling into. It tends be that if you’re selling into the enterprise, the ramp time for sales reps, because sales cycles are so long, tend to be much longer as well. They could be six months plus, could be a year. While if you’re selling more into SMB or mid-market, the ramp time to get a rep up and running can be much shorter, three to six months. Because the sales cycles are shorter, they just iterate much faster, and they ramp up much more quickly.
David : The other thing that people have to understand is that sales velocity is a really important component to figuring out how many reps you should be hiring, whether they should be inside reps or field reps. If it takes you 90 days to close a deal, that can’t be a $5,000 a year deal, that has to be a $50,000 or even $150,000 a year deal.
Das : Kristina, I know you’ve done a lot of work with metrics. So how do those play in?
Kristina : Probably the one way to sum it all together is how many months does it take to pay back customer acquisition cost. Very commonly within the SaaS world, we talk about a 12-month CAC payback. We typically want to see for every dollar you spend on sales and marketing, you get a dollar back within a year. That means you can tweak the inputs any way you want. Let’s say that doing paid acquisition is really effective for you. Then, you can spend proportionally more on paid acquisition and less on sales reps. Vice versa, if you have a great inbound engine, you actually can hire a lot more sales reps and spend more on sales headcount. With all formulas, it’s a guide rail, so if you have customers that retain really, really well, let’s say you’re selling to the enterprise, and you’ve got a 90% or 95% annual retention rate, then your CAC payback could be between 12 and 24 months. But let’s say you’re selling to the SMB and churn is 2% or 3% monthly, which ends up being like 80% to 90% annual retention. Then, because your customer is less sticky, I would recommend looking at a CAC payback of 6 to 12 months.
Das : How should you think about doing a free trial versus a paid trial?
David : On the one hand, the bottoms up motion where people can try essentially a full version of a product before they buy it is extremely powerful. On the other hand, I’ve started to try to think about how I advise companies, when they are thinking about a free trial for something that might cost $100,000 or $200,000 a year? Do we do a paid pilot that has some sort of contractual obligation that if we meet then turns into a commercial engagement?
Kristina : I do think the beauty of the bottoms up business is that you can get people to try the entire experience of the product for free, and they fall in love with it, and a certain percentage will convert. And that works really, really well for products that can self-serve. When you start moving up market to more complex products, the challenge with trials is it takes work to actually implement the product, whether it be integrations, IT has to give access, etc. You lose that self-serve ability, which is so amazing in the trial. And so, I tend to be more in the camp of paid trials, if it costs you money to actually deploy the trial. And when you’re selling to bigger customers, they associate value when they have to pay. Once a customer has to pay you, then they feel a need to make the project successful and thus they will onboard, schedule things, give you data and access.
David : If you can get to a point where you get the customer to do that paid pilot, such that the only difference between a pilot and an actual customer is just the signing of a contract, that’s very powerful. Now, that does force you to have a really good pre-sales motion to make sure that you can deliver on the promise you’ve made your customers. When companies don’t have a great product, and they paper over it with professional services and sales engineering and post-sales support, that paid pilot thing doesn’t work because the experience isn’t good enough. So, it really is incumbent on the SaaS company that does a paid pilot to make sure that they are able to deliver on that experience.
Kristina : And one emerging trend recently is people signing an annual contract with a one or three month out, as a replacement to the paid pilot. Because it’s the best of both worlds, the SaaS company that’s selling the product gets a higher level of commitment. And the customer gets the optionality of opting out in the same way as a trial without any clawback. It really comes down to where procurement falls. Sometimes procurement is at the beginning of that decision, which makes it more like an annual contract. Sometimes procurement is at the one or three month opt-out period, which means the customer already has a great experience, loves the product, and it is an easier way to convert procurements to actually sign on…
David : And that is a really good segue into renewals. I always tell founders, you might have this subscription business, but it’s not a recurring revenue business until the second year when the revenue actually recurs. I think you really have the first three months to get a customer up and running and happy. And if they’re not, you then have about three months to fix it. And if all that works out, then the remaining six months of the contract can be focused on upsell and expansion.
Das : Awesome. Thank you, Kristina. Thank you, David.
Kristina : Thanks so much for having us. This was fun.
David : Yeah, a lot of fun, great topics, and our favorite thing to talk about.
'''
summarizer(text)
```
|
{"language": "en", "license": "apache-2.0", "tags": ["bart", "seq2seq", "summarization"], "datasets": ["cnndaily/newyorkdaily/xsum/samsum/dialogsum/AMI"], "metrics": ["rouge"], "widget": [{"text": "Hi, I'm David and I'm supposed to be an industrial designer. Um, I just got the project announcement about what the project is. Designing a remote control. That's about it, didn't get anything else. Did you get the same thing? Cool. There's too much gear. Okay. Can't draw. Um. Yeah. Um, well anyway, I don't know, it's just the first animal I can think off the top of my head. Um. Yes. Big reason is 'cause I'm allergic to most animals. Allergic to animal fur, so um fish was a natural choice. Um, yeah, and I kind of like whales. They come in and go eat everything in sight. And they're quite harmless and mild and interesting. Tail's a bit big, I think. It's an after dinner dog then. Hmm. It does make sense from maybe the design point of view 'cause you have more complicated characters like European languages, then you need more buttons. So, possibly. Hmm. Yeah. And you keep losing them. Finding them is really a pain, you know. I mean it's usually quite small, or when you want it right, it slipped behind the couch or it's kicked under the table. You know. Yep. Mm-hmm. I think one factor would be production cost. Because there's a cap there, so um depends on how much you can cram into that price. Um. I think that that's the main factor. Cool.\nOkay. Right. Um well this is the kick-off meeting for our our project. Um and um this is just what we're gonna be doing over the next twenty five minutes. Um so first of all, just to kind of make sure that we all know each other, I'm Laura and I'm the project manager. Do you want to introduce yourself again? Okay. Great. Okay. Um so we're designing a new remote control and um Oh I have to record who's here actually. So that's David, Andrew and Craig, isn't it? And you all arrived on time. Um yeah so des uh design a new remote control. Um, as you can see it's supposed to be original, trendy and user friendly. Um so that's kind of our our brief, as it were. Um and so there are three different stages to the design. Um I'm not really sure what what you guys have already received um in your emails. What did you get? Mm-hmm. Is that what everybody got? Okay. Um. So we're gonna have like individual work and then a meeting about it. And repeat that process three times. Um and at this point we get try out the whiteboard over there. Um. So uh you get to draw your favourite animal and sum up your favourite characteristics of it. So who would like to go first? Very good. Mm-hmm. Yeah. Yeah. Right. Lovely. Right. You can take as long over this as you like, because we haven't got an awful lot to discuss. Ok oh we do we do. Don't feel like you're in a rush, anyway. Ach why not We might have to get you up again then. I don't know what mine is. I'm gonna have to think on the spot now. Is that a whale? Ah. Okay. God, I still don't know what I'm gonna write about. Um. I was gonna choose a dog as well. But I'll just draw a different kind of dog. M my favourite animal is my own dog at home. Um That doesn't really look like him, actually. He looks more like a pig, actually. Ah well. Do you? Oh that's very good of you. Uh. Um he's a mixture of uh various things. Um and what do I like about him, um That's just to suggest that his tail wags. 
Um he's very friendly and cheery and always pleased to see you, and very kind of affectionate and um uh and he's quite quite wee as well so you know he can doesn't take up too much space. Um and uh And he does a funny thing where he chases his tail as well, which is quite amusing, so It is. I think it is. He only does it after he's had his dinner and um he'll just all of a sudden just get up and start chasing his tail 'round the living room. Yeah, so uh Yeah, maybe. Maybe. Right, um where did you find this? Just down here? Yeah. Okay. Um what are we doing next? Uh um. Okay, uh we now need to discuss the project finance. Um so according to the brief um we're gonna be selling this remote control for twenty five Euro, um and we're aiming to make fifty million Euro. Um so we're gonna be selling this on an international scale. And uh we don't want it to cost any more than uh twelve fifty Euros, so fifty percent of the selling price. Sure. All together. Um I dunno. I imagine That's a good question. I imagine it probably is our sale actually because it's probably up to the the um the retailer to uh sell it for whatever price they want. Um. But I I don't know, I mean do you think the fact that it's going to be sold internationally will have a bearing on how we design it at all? Think it will? Um. Hmm. Oh yeah, regions and stuff, yeah. Yeah. Okay. Yeah. Well for a remote control, do you think that will be I suppose it's depends on how complicated our remote control is. Yeah, yeah. Okay. What, just like in terms of like the wealth of the country? Like how much money people have to spend on things like? Aye, I see what you mean, yeah. Marketing. Good marketing thoughts. Oh gosh, I should be writing all this down. Um. Mm. Yeah. Yeah, yeah. Like how much does, you know, a remote control cost. Well twenty five Euro, I mean that's um that's about like eighteen pounds or something, isn't it? Or no, is it as much as that? Sixteen seventeen eighteen pounds. Um, I dunno, I've never bought a remote control, so I don't know how how good a remote control that would get you. Um. But yeah, I suppose it has to look kind of cool and gimmicky. Um right, okay. Let me just scoot on ahead here. Okay. Um well d Does anybody have anything to add to uh to the finance issue at all? Thin No, actually. That would be useful, though, wouldn't it, if you knew like what your money would get you now. Mm-hmm. Yeah, yeah. Oh. Five minutes to end of meeting. Oh, okay. We're a bit behind. Yeah. Right, so do you think that should be like a main design aim of our remote control d you know, do your your satellite and your regular telly and your V_C_R_ and everything? Mm-hmm. Yeah. Or even like, you know, notes about um what you wanna watch. Like you might put in there oh I want to watch such and such and look a Oh that's a good idea. So extra functionalities. Mm-hmm. Hmm. Um okay, uh I'd wel we're gonna have to wrap up pretty quickly in the next couple of minutes. Um I'll just check we've nothing else. Okay. Um so anything else anybody wants to add about what they don't like about remote controls they've used, what they would really like to be part of this new one at all? You keep losing them. Okay. Yeah. W You get those ones where you can, if you like, whistle or make a really high pitched noise they beep. There I mean is that something we'd want to include, do you think? Dunno. Okay maybe. My goodness. Still feels quite primitive. Maybe like a touch screen or something? Okay. Uh-huh, okay. 
Well I guess that's up to our industrial designer. It looks better. Yeah. Okay. Okay. Right, well um so just to wrap up, the next meeting's gonna be in thirty minutes. So that's about um about ten to twelve by my watch. Um so inbetween now and then, um as the industrial designer, you're gonna be working on you know the actual working design of it so y you know what you're doing there. Um for user interface, technical functions, I guess that's you know like what we've been talking about, what it'll actually do. Um and uh marketing executive, you'll be just thinking about what it actually what, you know, what requirements it has to has to fulfil and you'll all get instructions emailed to you, I guess. Um. Yeah, so it's th the functional design stage is next, I guess. And uh and that's the end of the meeting. So I got that little message a lot sooner than I thought I would, so Mm-hmm. Uh-huh, yeah. Th Okay, well just very quickly 'cause this we're supposed to finish now. Um I guess that's up to us, I mean you probably want some kind of unique selling point of it, so um, you know Yeah. Mm-hmm. Yeah. Okay. Right, okay, we'll that's that's the end of the meeting, then. Um. So, uh thank you all for coming.\nUm I'm Craig and I'm User Interface. Yeah. Well, my favourite animal would be a monkey. Then they're small cute and furry, and uh when planet of the apes becomes real, I'm gonna be up there with them. Yeah. I know um My parents went out and bought um remote controls because um they got fed up of having four or five different remote controls for each things the house. So um for them it was just how many devices control. Uh.\nMm-hmm. Great. And I'm Andrew and I'm uh our marketing expert. Mm-hmm. Mm-hmm. Yeah, that's that's it. Yeah. I will go. That's fine. Alright. So This one here, right? Okay. Very nice. Alright. My favourite animal is like A beagle. Um charac favourite characteristics of it? Is that right? Uh, right, well basically um high priority for any animal for me is that they be willing to take a lot of physical affection from their family. And, yeah that they have lots of personality and uh be fit and in robust good health. So this is blue. Blue beagle. My family's beagle. I coulda told you a whole lot more about beagles. Boy, let me tell you. Impressionist. Alright. Mm. Superb sketch, by the way. Yep. I see a dog in there. Yep. Now I see a rooster. What kind is it? Is he aware that th it's his own cha tail he's chasing? Hmm. Probably when he was little he got lots of attention for doing it and has forever been conditioned. 'Kay. Um, can we just go over that again? Uh, so bas at twel Alright, yeah. Okay. So cost like production cost is twelve fifty, but selling price is is that wholesale or retail? Like on the shelf. Our sale our sale anyway. Yeah, okay okay. Okay. Mm-hmm. Alright. Yes. Mm-hmm. Mm-hmm. Well right away I'm wondering if there's um th th uh, like with D_V_D_ players, if there are zones. Um f frequencies or something um as well as uh characters, um different uh keypad styles and s symbols. Um. I don't know. Yeah. Yeah. Yeah. And then a and then al the other thing international is on top of the price. I'm thinking the price might might appeal to a certain market in one region, whereas in another it'll be different, so Just a chara just a characteristic of the Just Or just like, basic product podi positioning, the twenty five Euro remote control might be a big hit in London, might not be such a big hit in Greece, who knows, something like that, yeah. Yep. 
Right away I'm making some kind of assumptions about what what information we're given here, thinking, 'kay trendy probably means something other than just basic, something other than just standard. Um so I'm wondering right away, is selling twenty five Euros, is that sort of the thi is this gonna to be like the premium product kinda thing or Uh-huh. Mm-hmm. Yep. Yeah, I'd say so, yeah. No. Yeah, yeah. Mm-hmm. Do we have any other background information on like how that compares to other other Yeah. Mm-hmm. Yeah, interesting thing about discussing um production of a remote control for me is that l as you point out, I just don't think of remote controls as somethin something people consciously assess in their purchasing habits. It's just like getting shoelaces with shoes or something. It just comes along. Do you know what I mean? Like so sort of like how do you I I mean one one way of looking at it would be, well the people producing television sets, maybe they have to buy remote controls. Or another way is maybe people who have T_V_ sets are really fed up with their remote control and they really want a better one or something. But Right. Right. Okay so Right, so in function one of the priorities might be to combine as many uses I think so. Yeah, yeah. Yeah. Well like um, maybe what we could use is a sort of like a example of a successful other piece technology is palm palm pilots. They're gone from being just like little sort of scribble boards to cameras, M_P_ three players, telephones, everything, agenda. So, like, I wonder if we might add something new to the to the remote control market, such as the lighting in your house, or um Yeah, yeah. An Yeah. Like, p personally for me, at home I've I've combined the um the audio video of my television set and my D_V_D_ player and my C_D_ player. So they w all work actually function together but I have different remote controls for each of them. So it's sort of ironic that that then they're in there um you know, the sound and everything it's just one system. But each one's got its own little part. Mm. Mm. Mm. Mm-hmm. Mm-hmm. Yeah. Yeah. That's just really good id Yep. Uh, sure. I remember when the first remote control my my family had was on a cable. Actually had a cable between it and the T_V_ and big like buttons that sort of like, like on a blender or something. And um, you know, when I think about what they are now, it's better, but actually it's still kind of, I dunno, like a massive junky thing on the table. Maybe we could think about how, could be more, you know, streamlined. S Something like that, yeah. Or whatever would be technologically reasonable. 'Cause it could b it could it could be that f it could be that functionally that doesn't make it any better, but that just the appeal of of not having You know, these days there's a r pe things in people's homes are becoming more and more like chic, you know. Um, nicer materials and might be be worth exploring anyway. Okay. Um. Before we wrap up, just to make sure we're all on the same page here, um, do we We were given sort of an example of a coffee machine or something, right? Well, um are we at ma right now on the assumption that our television remote control may have features which go beyond the television? Or are we keeping sort of like a a design commitment to television features? I I don't know. Yep. Yeah, sure. Okay. Okay, yeah. Okay. Okay. Okay. 
Alright."}], "model-index": [{"name": "MEETING_SUMMARY", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "samsum", "type": "samsum"}, "metrics": [{"type": "rouge-1", "value": 53.8795, "name": "Validation ROGUE-1"}, {"type": "rouge-2", "value": 28.4975, "name": "Validation ROGUE-2"}, {"type": "rouge-L", "value": 44.1899, "name": "Validation ROGUE-L"}, {"type": "rouge-Lsum", "value": 49.4863, "name": "Validation ROGUE-Lsum"}, {"type": "gen-length", "value": 30.088, "name": "Validation ROGUE-Lsum"}, {"type": "rouge-1", "value": 53.2284, "name": "Test ROGUE-1"}, {"type": "rouge-2", "value": 28.184, "name": "Test ROGUE-2"}, {"type": "rouge-L", "value": 44.122, "name": "Test ROGUE-L"}, {"type": "rouge-Lsum", "value": 49.0301, "name": "Test ROGUE-Lsum"}, {"type": "gen-length", "value": 29.9951, "name": "Test ROGUE-Lsum"}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "bazzhangz/sumdataset", "type": "bazzhangz/sumdataset", "config": "bazzhangz--sumdataset", "split": "train"}, "metrics": [{"type": "rouge", "value": 40.5544, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 17.0751, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 32.153, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 36.4277, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 2.116729736328125, "name": "loss", "verified": true}, {"type": "gen_len", "value": 42.1978, "name": "gen_len", "verified": true}]}, {"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "xsum", "type": "xsum"}, "metrics": [{"type": "rouge-1", "value": 35.9078, "name": "Validation ROGUE-1"}, {"type": "rouge-2", "value": 14.2497, "name": "Validation ROGUE-2"}, {"type": "rouge-L", "value": 28.1421, "name": "Validation ROGUE-L"}, {"type": "rouge-Lsum", "value": 28.9826, "name": "Validation ROGUE-Lsum"}, {"type": "gen-length", "value": 32.0167, "name": "Validation ROGUE-Lsum"}, {"type": "rouge-1", "value": 36.0241, "name": "Test ROGUE-1"}, {"type": "rouge-2", "value": 14.3715, "name": "Test ROGUE-2"}, {"type": "rouge-L", "value": 28.1968, "name": "Test ROGUE-L"}, {"type": "rouge-Lsum", "value": 29.0527, "name": "Test ROGUE-Lsum"}, {"type": "gen-length", "value": 31.9933, "name": "Test ROGUE-Lsum"}]}, {"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "dialogsum", "type": "dialogsum"}, "metrics": [{"type": "rouge-1", "value": 39.8612, "name": "Validation ROGUE-1"}, {"type": "rouge-2", "value": 16.6917, "name": "Validation ROGUE-2"}, {"type": "rouge-L", "value": 32.2718, "name": "Validation ROGUE-L"}, {"type": "rouge-Lsum", "value": 35.8748, "name": "Validation ROGUE-Lsum"}, {"type": "gen-length", "value": 41.726, "name": "Validation ROGUE-Lsum"}, {"type": "rouge-1", "value": 36.9608, "name": "Test ROGUE-1"}, {"type": "rouge-2", "value": 14.3058, "name": "Test ROGUE-2"}, {"type": "rouge-L", "value": 29.3261, "name": "Test ROGUE-L"}, {"type": "rouge-Lsum", "value": 32.9, "name": "Test ROGUE-Lsum"}, {"type": "gen-length", "value": 43.086, "name": "Test ROGUE-Lsum"}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "samsum", "type": "samsum", "config": "samsum", "split": "test"}, "metrics": [{"type": "rouge", "value": 53.1878, "name": "ROUGE-1", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVkNTczYjFmYzBmMzczNWE0MGY4MDAyZWExOGNjZmY1Yzk2ZGM1MGNjZmFmYWUyZmIxZjdjOTk4OTc4OGJlMSIsInZlcnNpb24iOjF9.yyzPpGtESuZXy_lBESrboGxdGYB7I6jaIjquCYqliE2xdbGf5awDFpDUwlZHDuw6RD2mIZv1FC8PPs9lOHuSAg"}, {"type": "rouge", "value": 28.1666, "name": "ROUGE-2", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjAzOTdjNGYxNWMzYmFjYjRmMTcxYzI0MmNlNmM5Nzg2MzBlNDdmZWFkN2EwMDE2ZTZmYzc0Zjg0ZDc0M2IxNiIsInZlcnNpb24iOjF9.cPH6O50T6HekO227Xzha-EN_Jp7JS9fh5EP9I0tHxbpGptKtZOQC-NG68zfU2eJKlRSrmgaBYs8tjfTvpAgyDg"}, {"type": "rouge", "value": 44.117, "name": "ROUGE-L", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmNmMzJkYjMxMjhlZDM4YmU3NmI1MDExNzhiYmVhMzEyZGJjNDJkNzczNGQwOTMwNzg2YjU1ZWQ4MDhiMzkxYiIsInZlcnNpb24iOjF9.lcEXK15UqZOdXnPjVqIhFd6o_PLROSIONTRFX5NbwanjEI_MWMLpDh_V0Kpnvs_W0sE6cXh2yoifSYNDA5W7Bw"}, {"type": "rouge", "value": 49.0094, "name": "ROUGE-LSUM", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYThkYjk4ZjMzYjI0OTAxNDJiZTU5MzE0YjI5MjEzYTYwNWEzMmU5NjU2ZjQ5NzJhMzkyNmVhNWFjZmM1MjAwMSIsInZlcnNpb24iOjF9.LTn6LpKuMO4Rv4NgsbPmtr2ewiKyoqAXlf6YJfM_6GKwVTKpnJxwx7gaaAtMb0jVlgieITMP11JmbeRfMEhgDg"}, {"type": "loss", "value": 1.710614562034607, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjNjZmM0ZjkwYWYyMWIyMmFiMWI1ODBiYjRjNzVhM2JhN2NmNmM1ZDUwZWRjNDQxNzUwMWM4YjYxYTg1MWYwNyIsInZlcnNpb24iOjF9.hGXZhp9pe-HDJilXVvMCkqz-92YZvH6Qr7q9Z7fJkm8N9s0b4sl-4PwjQYJEOLEAhoRO2s-F5T3bmCYCaMiNBQ"}, {"type": "gen_len", "value": 29.9951, "name": "gen_len", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmY1NzZiMDAzNGJlNTg4Nzc0YzU1MTA3YTI3MzVmNGZkNWQ0ZDE4MGZlNGI1MzJmYzA3MjQ0MDZhMTcyYTk2NCIsInZlcnNpb24iOjF9.8dvMfY7Y-nw-K8NGgTXIGFMxaSUWQYBE1w3N5YYOn4iwnCe2ugo2qPIOxLY91q7CaAOMCSskFV3BDStQ4p0ZCg"}]}]}]}
|
knkarthick/MEETING_SUMMARY
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #safetensors #bart #text2text-generation #seq2seq #summarization #en #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Model obtained by Fine Tuning 'facebook/bart-large-xsum' using AMI Meeting Corpus, SAMSUM Dataset, DIALOGSUM Dataset, XSUM Dataset!
## Usage
# Example 1
# Example 2
# Example 3
# Example 4
|
[
"## Usage",
"# Example 1",
"# Example 2",
"# Example 3",
"# Example 4"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #bart #text2text-generation #seq2seq #summarization #en #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## Usage",
"# Example 1",
"# Example 2",
"# Example 3",
"# Example 4"
] |
summarization
|
transformers
|
## `bart-large-xsum-samsum`
This model was obtained by fine-tuning `facebook/bart-large-xsum` on [Samsum](https://huggingface.co/datasets/samsum) dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/bart-large-xsum-samsum")
conversation = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''
summarizer(conversation)
```
|
{"language": "en", "license": "apache-2.0", "tags": ["bart", "seq2seq", "summarization"], "datasets": ["samsum"], "widget": [{"text": "Hannah: Hey, do you have Betty's number?\nAmanda: Lemme check\nAmanda: Sorry, can't find it.\nAmanda: Ask Larry\nAmanda: He called her last time we were at the park together\nHannah: I don't know him well\nAmanda: Don't be shy, he's very nice\nHannah: If you say so..\nHannah: I'd rather you texted him\nAmanda: Just text him \ud83d\ude42\nHannah: Urgh.. Alright\nHannah: Bye\nAmanda: Bye bye\n"}], "model-index": [{"name": "bart-large-xsum-samsum", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization", "type": "samsum"}, "metrics": [{"type": "rouge-1", "value": 54.3921, "name": "Validation ROUGE-1"}, {"type": "rouge-2", "value": 29.8078, "name": "Validation ROUGE-2"}, {"type": "rouge-l", "value": 45.1543, "name": "Validation ROUGE-L"}, {"type": "rouge-1", "value": 53.3059, "name": "Test ROUGE-1"}, {"type": "rouge-2", "value": 28.355, "name": "Test ROUGE-2"}, {"type": "rouge-l", "value": 44.0953, "name": "Test ROUGE-L"}]}, {"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "samsum", "type": "samsum", "config": "samsum", "split": "train"}, "metrics": [{"type": "rouge", "value": 46.2492, "name": "ROUGE-1", "verified": true}, {"type": "rouge", "value": 21.346, "name": "ROUGE-2", "verified": true}, {"type": "rouge", "value": 37.2787, "name": "ROUGE-L", "verified": true}, {"type": "rouge", "value": 42.1317, "name": "ROUGE-LSUM", "verified": true}, {"type": "loss", "value": 1.6859958171844482, "name": "loss", "verified": true}, {"type": "gen_len", "value": 23.7103, "name": "gen_len", "verified": true}]}]}]}
|
knkarthick/bart-large-xsum-samsum
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bart #text2text-generation #seq2seq #summarization #en #dataset-samsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
## 'bart-large-xsum-samsum'
This model was obtained by fine-tuning 'facebook/bart-large-xsum' on Samsum dataset.
## Usage
|
[
"## 'bart-large-xsum-samsum'\nThis model was obtained by fine-tuning 'facebook/bart-large-xsum' on Samsum dataset.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #seq2seq #summarization #en #dataset-samsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"## 'bart-large-xsum-samsum'\nThis model was obtained by fine-tuning 'facebook/bart-large-xsum' on Samsum dataset.",
"## Usage"
] |
summarization
|
transformers
|
## `bart-large-xsum-samsum`
This model was obtained by fine-tuning `facebook/bart-large-xsum` on [Samsum](https://huggingface.co/datasets/samsum) dataset.
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="knkarthick/bart-large-xsum-samsum")
conversation = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''
summarizer(conversation)
```
|
{"language": "en", "license": "apache-2.0", "tags": ["bart", "seq2seq", "summarization"], "datasets": ["samsum"], "widget": [{"text": "Hannah: Hey, do you have Betty's number?\nAmanda: Lemme check\nAmanda: Sorry, can't find it.\nAmanda: Ask Larry\nAmanda: He called her last time we were at the park together\nHannah: I don't know him well\nAmanda: Don't be shy, he's very nice\nHannah: If you say so..\nHannah: I'd rather you texted him\nAmanda: Just text him \ud83d\ude42\nHannah: Urgh.. Alright\nHannah: Bye\nAmanda: Bye bye\n"}], "model-index": [{"name": "bart-large-xsum-samsum", "results": [{"task": {"type": "abstractive-text-summarization", "name": "Abstractive Text Summarization"}, "dataset": {"name": "SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization", "type": "samsum"}, "metrics": [{"type": "rouge-1", "value": 54.3921, "name": "Validation ROUGE-1"}, {"type": "rouge-2", "value": 29.8078, "name": "Validation ROUGE-2"}, {"type": "rouge-l", "value": 45.1543, "name": "Validation ROUGE-L"}, {"type": "rouge-1", "value": 53.3059, "name": "Test ROUGE-1"}, {"type": "rouge-2", "value": 28.355, "name": "Test ROUGE-2"}, {"type": "rouge-l", "value": 44.0953, "name": "Test ROUGE-L"}]}]}]}
|
knkarthick/meeting-summary-samsum
| null |
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"seq2seq",
"summarization",
"en",
"dataset:samsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bart #text2text-generation #seq2seq #summarization #en #dataset-samsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us
|
## 'bart-large-xsum-samsum'
This model was obtained by fine-tuning 'facebook/bart-large-xsum' on Samsum dataset.
## Usage
|
[
"## 'bart-large-xsum-samsum'\nThis model was obtained by fine-tuning 'facebook/bart-large-xsum' on Samsum dataset.",
"## Usage"
] |
[
"TAGS\n#transformers #pytorch #bart #text2text-generation #seq2seq #summarization #en #dataset-samsum #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## 'bart-large-xsum-samsum'\nThis model was obtained by fine-tuning 'facebook/bart-large-xsum' on Samsum dataset.",
"## Usage"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1607
## Model description
More information needed
## Intended uses & limitations
More information needed
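In the absence of documented usage, a minimal sketch assuming the checkpoint works with the standard `question-answering` pipeline (the question/context pair below is only an illustration):
```python
from transformers import pipeline

# Assumes the published repository id for this fine-tuned checkpoint.
qa = pipeline("question-answering", model="knlu1016/albert-base-v2-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="albert-base-v2 was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```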
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8695 | 1.0 | 5540 | 0.9092 |
| 0.6594 | 2.0 | 11080 | 0.9148 |
| 0.5053 | 3.0 | 16620 | 0.9641 |
| 0.3477 | 4.0 | 22160 | 1.1607 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "albert-base-v2-finetuned-squad", "results": []}]}
|
knlu1016/albert-base-v2-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
albert-base-v2-finetuned-squad
==============================
This model is a fine-tuned version of albert-base-v2 on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1607
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.13.0
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #albert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.13.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
null | null |
vi_law_bert
|
{}
|
kodiak619/vi_law_bert
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
vi_law_bert
|
[] |
[
"TAGS\n#region-us \n"
] |
automatic-speech-recognition
|
transformers
|
Testing Khmer ASR baseline.
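A minimal transcription sketch, assuming the checkpoint follows the standard wav2vec2 CTC interface and ships with its processor (the audio path below is a placeholder):
```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Assumed repository id; the processor/vocabulary are assumed to ship with the checkpoint.
model_id = "kongkeaouch/wav2vec2-xls-r-300m-kh"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a 16 kHz mono recording (the path is a placeholder).
speech, _ = librosa.load("sample_khmer.wav", sr=16000)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```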
|
{}
|
kongkeaouch/wav2vec2-xls-r-300m-kh
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
Testing Khmer ASR baseline.
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
token-classification
|
spacy
|
| Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_lg` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 87.70 |
| `ENTS_P` | 86.50 |
| `ENTS_R` | 88.93 |
| `TOK2VEC_LOSS` | 226109.53 |
| `NER_LOSS` | 302222.55 |
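### Example usage
A minimal sketch, assuming the `en_core_med7_lg` package has been installed into the current environment so that `spacy.load` can resolve it:
```python
import spacy

# Assumes the en_core_med7_lg package is installed in this environment.
med7 = spacy.load("en_core_med7_lg")

text = "A patient was prescribed Magnesium hydroxide 400mg/5ml suspension PO of total 30ml bid for the next 5 days."
doc = med7(text)

# Print each recognised entity with its label (DRUG, DOSAGE, FREQUENCY, ...).
for ent in doc.ents:
    print(ent.text, ent.label_)
```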
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
```
|
{"language": ["en"], "license": "mit", "tags": ["spacy", "token-classification"]}
|
kormilitzin/en_core_med7_lg
| null |
[
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #token-classification #en #license-mit #model-index #has_space #region-us
|
### Label Scheme
View label scheme (7 labels for 1 components)
### Accuracy
### BibTeX entry and citation info
|
[
"### Label Scheme\n\n\n\nView label scheme (7 labels for 1 components)",
"### Accuracy",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#spacy #token-classification #en #license-mit #model-index #has_space #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (7 labels for 1 components)",
"### Accuracy",
"### BibTeX entry and citation info"
] |
token-classification
|
spacy
|
| Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_trf` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 90.33 |
| `ENTS_P` | 88.22 |
| `ENTS_R` | 92.54 |
| `TRANSFORMER_LOSS` | 2502627.06 |
| `NER_LOSS` | 114576.77 |
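### Example usage
A minimal sketch, assuming the `en_core_med7_trf` package has been installed; `nlp.pipe` is shown because transformer inference benefits from batched processing (the notes and batch size are illustrative):
```python
import spacy

# Assumes the en_core_med7_trf package is installed in this environment.
med7 = spacy.load("en_core_med7_trf")

notes = [
    "Start aspirin 75 mg once daily by mouth.",
    "Metformin 500mg PO bid for the next 3 months.",
]

# Batched inference over multiple clinical notes.
for doc in med7.pipe(notes, batch_size=8):
    print([(ent.text, ent.label_) for ent in doc.ents])
```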
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
```
|
{"language": ["en"], "license": "mit", "tags": ["spacy", "token-classification"]}
|
kormilitzin/en_core_med7_trf
| null |
[
"spacy",
"token-classification",
"en",
"license:mit",
"model-index",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #token-classification #en #license-mit #model-index #has_space #region-us
|
### Label Scheme
View label scheme (7 labels for 1 components)
### Accuracy
### BibTeX entry and citation info
|
[
"### Label Scheme\n\n\n\nView label scheme (7 labels for 1 components)",
"### Accuracy",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#spacy #token-classification #en #license-mit #model-index #has_space #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (7 labels for 1 components)",
"### Accuracy",
"### BibTeX entry and citation info"
] |
feature-extraction
|
transformers
|
Converted for Tensorflow
```
!pip install transformers sentencepiece
from transformers import TFAutoModel, AutoTokenizer
name = "ai4bharat/indic-bert"
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local-indic-bert")
tokenizer.save_pretrained("local-indic-bert")
```
|
{}
|
kornesh/indic-bert
| null |
[
"transformers",
"tf",
"albert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #albert #feature-extraction #endpoints_compatible #region-us
|
Converted for Tensorflow
|
[] |
[
"TAGS\n#transformers #tf #albert #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
Converted for Tensorflow
```
!pip install transformers sentencepiece
from transformers import TFAutoModel, AutoTokenizer
name = "xlm-roberta-base"
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local-xlm-roberta-base")
tokenizer.save_pretrained("local-xlm-roberta-base")
```
|
{}
|
kornesh/xlm-roberta-base
| null |
[
"transformers",
"tf",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #xlm-roberta #feature-extraction #endpoints_compatible #region-us
|
Converted for Tensorflow
|
[] |
[
"TAGS\n#transformers #tf #xlm-roberta #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
Converted for Tensorflow
```
name = "xlm-roberta-large"
!rm -rf local
!git clone https://huggingface.co/kornesh/"$name" local
model = TFAutoModel.from_pretrained(name, from_pt=True)
tokenizer = AutoTokenizer.from_pretrained(name)
model.save_pretrained("local")
tokenizer.save_pretrained("local")
!cd local/ && git lfs install && git add . && git commit -m "Initial commit" && git push
```
|
{}
|
kornesh/xlm-roberta-large
| null |
[
"transformers",
"tf",
"xlm-roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #xlm-roberta #feature-extraction #endpoints_compatible #region-us
|
Converted for Tensorflow
|
[] |
[
"TAGS\n#transformers #tf #xlm-roberta #feature-extraction #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM)
Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden-KE-MLM"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Biden!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Biden is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
```
|
{"language": "en", "license": "gpl-3.0", "tags": ["twitter", "stance-detection", "election2020", "politics"]}
|
kornosk/bert-election2020-twitter-stance-biden-KE-MLM
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"twitter",
"stance-detection",
"election2020",
"politics",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM)
Pre-trained weights for KE-MLM model in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the official repository for more detail.
# Reference
- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
|
[
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM)\n\nPre-trained weights for KE-MLM model in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Joe Biden.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (KE-MLM)\n\nPre-trained weights for KE-MLM model in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Joe Biden.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
text-classification
|
transformers
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT)
Pre-trained weights for **f-BERT** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-biden"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Biden!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Biden is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
```
|
{"language": "en", "license": "gpl-3.0", "tags": ["twitter", "stance-detection", "election2020", "politics"]}
|
kornosk/bert-election2020-twitter-stance-biden
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"twitter",
"stance-detection",
"election2020",
"politics",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT)
Pre-trained weights for f-BERT in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Joe Biden.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.
Please see the official repository for more detail.
# Reference
- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
|
[
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT)\n\nPre-trained weights for f-BERT in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Joe Biden.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Joe Biden (f-BERT)\n\nPre-trained weights for f-BERT in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Joe Biden.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Joe Biden.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Joe Biden.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
text-classification
|
transformers
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM)
Pre-trained weights for **KE-MLM model** in [Knowledge Enhance Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump-KE-MLM"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Trump!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Trump is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
```
|
{"language": "en", "license": "gpl-3.0", "tags": ["twitter", "stance-detection", "election2020", "politics"]}
|
kornosk/bert-election2020-twitter-stance-trump-KE-MLM
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"twitter",
"stance-detection",
"election2020",
"politics",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM)
Pre-trained weights for KE-MLM model in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the official repository for more detail.
# Reference
- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
|
[
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM)\n\nPre-trained weights for KE-MLM model in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Donald Trump.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (KE-MLM)\n\nPre-trained weights for KE-MLM model in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Donald Trump.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
text-classification
|
transformers
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT)
Pre-trained weights for **f-BERT** in [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our [stance-labeled data](https://github.com/GU-DataLab/stance-detection-KE-MLM) for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np
# choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# select model path here
pretrained_LM_path = "kornosk/bert-election2020-twitter-stance-trump"
# load model
tokenizer = AutoTokenizer.from_pretrained(pretrained_LM_path)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_LM_path)
id2label = {
0: "AGAINST",
1: "FAVOR",
2: "NONE"
}
##### Prediction Neutral #####
sentence = "Hello World."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Favor #####
sentence = "Go Go Trump!!!"
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
##### Prediction Against #####
sentence = "Trump is the worst."
inputs = tokenizer(sentence.lower(), return_tensors="pt")
outputs = model(**inputs)
predicted_probability = torch.softmax(outputs[0], dim=1)[0].tolist()
print("Sentence:", sentence)
print("Prediction:", id2label[np.argmax(predicted_probability)])
print("Against:", predicted_probability[0])
print("Favor:", predicted_probability[1])
print("Neutral:", predicted_probability[2])
# please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
```
|
{"language": "en", "license": "gpl-3.0", "tags": ["twitter", "stance-detection", "election2020", "politics"]}
|
kornosk/bert-election2020-twitter-stance-trump
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"twitter",
"stance-detection",
"election2020",
"politics",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT)
Pre-trained weights for f-BERT in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Donald Trump.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.
# Usage
This pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.
Please see the official repository for more detail.
# Reference
- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
|
[
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT)\n\nPre-trained weights for f-BERT in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Donald Trump.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #twitter #stance-detection #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pre-trained BERT on Twitter US Election 2020 for Stance Detection towards Donald Trump (f-BERT)\n\nPre-trained weights for f-BERT in Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election. Then fine-tuned using our stance-labeled data for stance detection towards Donald Trump.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective with classification layer fine-tuned for stance detection towards Donald Trump.",
"# Usage\n\nThis pre-trained language model is fine-tuned to the stance detection task specifically for Donald Trump.\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
fill-mask
|
transformers
|
# Pre-trained BERT on Twitter US Political Election 2020
Pre-trained weights for [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
We use the initial weights of BERT-base (uncased), i.e. `bert-base-uncased`.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective.
# Usage
This pre-trained language model **can be fine-tuned to any downstream task (e.g. classification)**.
Please see the [official repository](https://github.com/GU-DataLab/stance-detection-KE-MLM) for more detail.
```python
from transformers import BertTokenizer, BertForMaskedLM, pipeline
import torch
# Choose GPU if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Select model path here
pretrained_LM_path = "kornosk/bert-political-election2020-twitter-mlm"
# Load model
tokenizer = BertTokenizer.from_pretrained(pretrained_LM_path)
model = BertForMaskedLM.from_pretrained(pretrained_LM_path)
# Fill mask
example = "Trump is the [MASK] of USA"
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
# If the line above does not work, use the following one instead.
# Newer versions of Hugging Face Transformers accept the model name string directly.
fill_mask = pipeline('fill-mask', model=pretrained_LM_path, tokenizer=tokenizer)
outputs = fill_mask(example)
print(outputs)
# See embeddings
inputs = tokenizer(example, return_tensors="pt")
outputs = model(**inputs)
print(outputs)
# OR you can use this model to train on your downstream task!
# Please consider citing our paper if you feel this is useful :)
```
# Reference
- [Knowledge Enhanced Masked Language Model for Stance Detection](https://www.aclweb.org/anthology/2021.naacl-main.376), NAACL 2021.
# Citation
```bibtex
@inproceedings{kawintiranon2021knowledge,
title={Knowledge Enhanced Masked Language Model for Stance Detection},
author={Kawintiranon, Kornraphop and Singh, Lisa},
booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
year={2021},
publisher={Association for Computational Linguistics},
url={https://www.aclweb.org/anthology/2021.naacl-main.376}
}
```
|
{"language": "en", "license": "gpl-3.0", "tags": ["twitter", "masked-token-prediction", "election2020", "politics"]}
|
kornosk/bert-political-election2020-twitter-mlm
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"twitter",
"masked-token-prediction",
"election2020",
"politics",
"en",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #twitter #masked-token-prediction #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Pre-trained BERT on Twitter US Political Election 2020
Pre-trained weights for Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
We use the initialized weights from BERT-base (uncased) or 'bert-base-uncased'.
# Training Data
This model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.
# Training Objective
This model is initialized with BERT-base and trained with normal MLM objective.
# Usage
This pre-trained language model can be fine-tunned to any downstream task (e.g. classification).
Please see the official repository for more detail.
# Reference
- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.
|
[
"# Pre-trained BERT on Twitter US Political Election 2020\n\nPre-trained weights for Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.\n\nWe use the initialized weights from BERT-base (uncased) or 'bert-base-uncased'.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective.",
"# Usage\n\nThis pre-trained language model can be fine-tunned to any downstream task (e.g. classification).\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #twitter #masked-token-prediction #election2020 #politics #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Pre-trained BERT on Twitter US Political Election 2020\n\nPre-trained weights for Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021.\n\nWe use the initialized weights from BERT-base (uncased) or 'bert-base-uncased'.",
"# Training Data\n\nThis model is pre-trained on over 5 million English tweets about the 2020 US Presidential Election.",
"# Training Objective\n\nThis model is initialized with BERT-base and trained with normal MLM objective.",
"# Usage\n\nThis pre-trained language model can be fine-tunned to any downstream task (e.g. classification).\n\nPlease see the official repository for more detail.",
"# Reference\n\n- Knowledge Enhance Masked Language Model for Stance Detection, NAACL 2021."
] |
feature-extraction
|
transformers
|
hello
|
{}
|
kouohhashi/roberta_ja
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #roberta #feature-extraction #endpoints_compatible #region-us
|
hello
|
[] |
[
"TAGS\n#transformers #pytorch #jax #roberta #feature-extraction #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Tony Stark DialoGPT Model
|
{"tags": ["Conversational"]}
|
kp17/DialoGPT-small-tonystark
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"Conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #Conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Tony Stark DialoGPT Model
|
[
"# Tony Stark DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #Conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Tony Stark DialoGPT Model"
] |
null | null |
# SaShiMi

> **It's Raw! Audio Generation with State-Space Models**\
> Karan Goel, Albert Gu, Chris Donahue, Christopher Ré\
> Paper: https://arxiv.org/pdf/2202.09729.pdf
This repository contains a release of the artifacts for the SaShiMi paper. To use our code and artifacts in your research, please refer to the instructions at [https://github.com/HazyResearch/state-spaces/tree/main/sashimi](https://github.com/HazyResearch/state-spaces/tree/main/sashimi).
|
{}
|
krandiash/sashimi-release
| null |
[
"arxiv:2202.09729",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2202.09729"
] |
[] |
TAGS
#arxiv-2202.09729 #region-us
|
# SaShiMi
!SaShiMi
> It's Raw! Audio Generation with State-Space Models\
> Karan Goel, Albert Gu, Chris Donahue, Christopher Ré\
> Paper: URL
This repository contains a release of the artifacts for the SaShiMi paper. To use our code and artifacts in your research, please refer to the instructions at URL
|
[
"# SaShiMi\n\n!SaShiMi\n> It's Raw! Audio Generation with State-Space Models\\\n> Karan Goel, Albert Gu, Chris Donahue, Christopher Ré\\\n> Paper: URL\n\nThis repository contains a release of the artifacts for the SaShiMi paper. To use our code and artifacts in your research, please refer to the instructions at URL"
] |
[
"TAGS\n#arxiv-2202.09729 #region-us \n",
"# SaShiMi\n\n!SaShiMi\n> It's Raw! Audio Generation with State-Space Models\\\n> Karan Goel, Albert Gu, Chris Donahue, Christopher Ré\\\n> Paper: URL\n\nThis repository contains a release of the artifacts for the SaShiMi paper. To use our code and artifacts in your research, please refer to the instructions at URL"
] |
automatic-speech-recognition
|
transformers
|
## Evaluation on Zeroth-Korean ASR corpus
[Google colab notebook(Korean)](https://colab.research.google.com/github/indra622/tutorials/blob/master/wav2vec2_korean_tutorial.ipynb)
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import soundfile as sf
import torch
from jiwer import wer
processor = Wav2Vec2Processor.from_pretrained("kresnik/wav2vec2-large-xlsr-korean")
model = Wav2Vec2ForCTC.from_pretrained("kresnik/wav2vec2-large-xlsr-korean").to('cuda')
ds = load_dataset("kresnik/zeroth_korean", "clean")
test_ds = ds['test']
def map_to_array(batch):
speech, _ = sf.read(batch["file"])
batch["speech"] = speech
return batch
test_ds = test_ds.map(map_to_array)
def map_to_pred(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding="longest")
input_values = inputs.input_values.to("cuda")
with torch.no_grad():
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
batch["transcription"] = transcription
return batch
result = test_ds.map(map_to_pred, batched=True, batch_size=16, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
### Expected WER: 4.74%
### Expected CER: 1.78%
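The reported CER can be checked in the same way, continuing from the evaluation script above. This is a hedged addition and assumes a jiwer version that exposes `cer`:

```python
# Continues the evaluation script above; `result` is the mapped test set.
from jiwer import cer

print("CER:", cer(result["text"], result["transcription"]))
```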
|
{"language": "ko", "license": "apache-2.0", "tags": ["speech", "audio", "automatic-speech-recognition"], "datasets": ["kresnik/zeroth_korean"], "model-index": [{"name": "Wav2Vec2 XLSR Korean", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Zeroth Korean", "type": "kresnik/zeroth_korean", "args": "clean"}, "metrics": [{"type": "wer", "value": 4.74, "name": "Test WER"}, {"type": "cer", "value": 1.78, "name": "Test CER"}]}]}]}
|
kresnik/wav2vec2-large-xlsr-korean
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"speech",
"audio",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #speech #audio #ko #dataset-kresnik/zeroth_korean #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
## Evaluation on Zeroth-Korean ASR corpus
Google colab notebook(Korean)
### Expected WER: 4.74%
### Expected CER: 1.78%
|
[
"## Evaluation on Zeroth-Korean ASR corpus\n\nGoogle colab notebook(Korean)",
"### Expected WER: 4.74%",
"### Expected CER: 1.78%"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #speech #audio #ko #dataset-kresnik/zeroth_korean #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"## Evaluation on Zeroth-Korean ASR corpus\n\nGoogle colab notebook(Korean)",
"### Expected WER: 4.74%",
"### Expected CER: 1.78%"
] |
null |
transformers
|
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-base-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-base-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
{"language": "ko"}
|
krevas/finance-koelectra-base-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"ko",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #electra #pretraining #ko #endpoints_compatible #region-us
|
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean ('finance-koelectra-base-discriminator')
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the ICLR paper
or in the official ELECTRA repository on GitHub.
## Stats
The current version of the model is trained on a financial news data of Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained a cased model on a TITAN RTX for 500k steps.
## Usage
# Huggingface model hub
All models are available on the Huggingface model hub.
|
[
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-base-discriminator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #ko #endpoints_compatible #region-us \n",
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-base-discriminator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
fill-mask
|
transformers
|
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-base-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-base-generator",
tokenizer="krevas/finance-koelectra-base-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
{"language": "ko"}
|
krevas/finance-koelectra-base-generator
| null |
[
"transformers",
"pytorch",
"electra",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #electra #fill-mask #ko #autotrain_compatible #endpoints_compatible #region-us
|
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean ('finance-koelectra-base-generator')
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the ICLR paper
or in the official ELECTRA repository on GitHub.
## Stats
The current version of the model is trained on a financial news data of Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained a cased model on a TITAN RTX for 500k steps.
## Usage
# Huggingface model hub
All models are available on the Huggingface model hub.
|
[
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-base-generator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
[
"TAGS\n#transformers #pytorch #electra #fill-mask #ko #autotrain_compatible #endpoints_compatible #region-us \n",
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-base-generator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
null |
transformers
|
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-discriminator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import ElectraForPreTraining, ElectraTokenizer
import torch
discriminator = ElectraForPreTraining.from_pretrained("krevas/finance-koelectra-small-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("krevas/finance-koelectra-small-discriminator")
sentence = "내일 해당 종목이 대폭 상승할 것이다"
fake_sentence = "내일 해당 종목이 맛있게 상승할 것이다"
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()[1:-1]]
print("fake token : %s" % fake_tokens[predictions.tolist()[1:-1].index(1)])
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
{"language": "ko"}
|
krevas/finance-koelectra-small-discriminator
| null |
[
"transformers",
"pytorch",
"electra",
"pretraining",
"ko",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #electra #pretraining #ko #endpoints_compatible #region-us
|
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean ('finance-koelectra-small-discriminator')
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the ICLR paper
or in the official ELECTRA repository on GitHub.
## Stats
The current version of the model is trained on a financial news data of Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained a cased model on a TITAN RTX for 500k steps.
## Usage
# Huggingface model hub
All models are available on the Huggingface model hub.
|
[
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-small-discriminator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
[
"TAGS\n#transformers #pytorch #electra #pretraining #ko #endpoints_compatible #region-us \n",
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-small-discriminator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
fill-mask
|
transformers
|
# 📈 Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean (`finance-koelectra-small-generator`)
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the [ICLR paper](https://openreview.net/forum?id=r1xMH1BtvB)
or in the [official ELECTRA repository](https://github.com/google-research/electra) on GitHub.
## Stats
The current version of the model is trained on financial news data from Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained as a cased model on a TITAN RTX for 500k steps.
## Usage
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="krevas/finance-koelectra-small-generator",
tokenizer="krevas/finance-koelectra-small-generator"
)
print(fill_mask(f"내일 해당 종목이 대폭 {fill_mask.tokenizer.mask_token}할 것이다."))
```
# Huggingface model hub
All models are available on the [Huggingface model hub](https://huggingface.co/krevas).
|
{"language": "ko"}
|
krevas/finance-koelectra-small-generator
| null |
[
"transformers",
"pytorch",
"safetensors",
"electra",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #safetensors #electra #fill-mask #ko #autotrain_compatible #endpoints_compatible #region-us
|
# Financial Korean ELECTRA model
Pretrained ELECTRA Language Model for Korean ('finance-koelectra-small-generator')
> ELECTRA is a new method for self-supervised language representation learning. It can be used to
> pre-train transformer networks using relatively little compute. ELECTRA models are trained to
> distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to
> the discriminator of a GAN.
More details about ELECTRA can be found in the ICLR paper
or in the official ELECTRA repository on GitHub.
## Stats
The current version of the model is trained on a financial news data of Naver news.
The final training corpus has a size of 25GB and 2.3B tokens.
This model was trained a cased model on a TITAN RTX for 500k steps.
## Usage
# Huggingface model hub
All models are available on the Huggingface model hub.
|
[
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-small-generator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
[
"TAGS\n#transformers #pytorch #safetensors #electra #fill-mask #ko #autotrain_compatible #endpoints_compatible #region-us \n",
"# Financial Korean ELECTRA model\n\nPretrained ELECTRA Language Model for Korean ('finance-koelectra-small-generator')\n\n> ELECTRA is a new method for self-supervised language representation learning. It can be used to\n> pre-train transformer networks using relatively little compute. ELECTRA models are trained to\n> distinguish \"real\" input tokens vs \"fake\" input tokens generated by another neural network, similar to\n> the discriminator of a GAN.\n\nMore details about ELECTRA can be found in the ICLR paper\nor in the official ELECTRA repository on GitHub.",
"## Stats\n\nThe current version of the model is trained on a financial news data of Naver news.\n\nThe final training corpus has a size of 25GB and 2.3B tokens.\n\nThis model was trained a cased model on a TITAN RTX for 500k steps.",
"## Usage",
"# Huggingface model hub\n\nAll models are available on the Huggingface model hub."
] |
text-generation
|
transformers
|
# Phoenix DialoGPT model
|
{"tags": ["conversational"]}
|
kripanshudixit/DialoGPT-small-phoenix
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Phoenix DialoGPT model
|
[
"# Phoenix DialoGPT model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Phoenix DialoGPT model"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3942
- Wer: 0.3149
## Model description
More information needed
## Intended uses & limitations
More information needed
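
As a minimal illustration (not part of the original card), the checkpoint can be used as a standard Wav2Vec2 CTC model for Turkish transcription. The file name below is a placeholder and 16 kHz mono audio is assumed:

```python
# Hypothetical inference sketch; assumes 16 kHz mono audio in "example_tr.wav".
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "krirk/wav2vec2-large-xls-r-300m-turkish-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = sf.read("example_tr.wav")  # placeholder path
inputs = processor(speech, sampling_rate=16000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```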
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
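
Expressed as a `transformers` `TrainingArguments` object, the settings above map roughly to the sketch below. This is a hedged reconstruction only; the output directory is a placeholder and the exact training script for this run is not shown in the card.

```python
# Hedged sketch: the listed hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-turkish-colab",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size 32
    num_train_epochs=30,
    warmup_steps=500,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```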
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9921 | 3.67 | 400 | 0.7820 | 0.7857 |
| 0.4496 | 7.34 | 800 | 0.4630 | 0.4977 |
| 0.2057 | 11.01 | 1200 | 0.4293 | 0.4627 |
| 0.1328 | 14.68 | 1600 | 0.4464 | 0.4068 |
| 0.1009 | 18.35 | 2000 | 0.4461 | 0.3742 |
| 0.0794 | 22.02 | 2400 | 0.4328 | 0.3467 |
| 0.0628 | 25.69 | 2800 | 0.4036 | 0.3263 |
| 0.0497 | 29.36 | 3200 | 0.3942 | 0.3149 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-turkish-colab", "results": []}]}
|
krirk/wav2vec2-large-xls-r-300m-turkish-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-turkish-colab
=======================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3942
* Wer: 0.3149
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.13.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.13.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Spock Model
|
{"tags": ["conversational"]}
|
kris/DialoGPT-small-spock
| null |
[
"transformers",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Spock Model
|
[] |
[
"TAGS\n#transformers #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Spock model
|
{"tags": ["conversational"]}
|
kris/DialoGPT-small-spock3
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Spock model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Spock model
|
{"tags": ["conversational"]}
|
kris/DialoGPT-small-spock4
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Spock model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Spock model
|
{"tags": ["conversational"]}
|
kris/DialoGPT-small-spock5
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#Spock model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
sentence-similarity
|
sentence-transformers
|
# sts-GBERT-bi-encoder
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('krlng/sts-GBERT-bi-encoder')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('krlng/sts-GBERT-bi-encoder')
model = AutoModel.from_pretrained('krlng/sts-GBERT-bi-encoder')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sts-GBERT-bi-encoder)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 859 with parameters:
```
{'batch_size': 10, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"callback": null,
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 344,
"weight_decay": 0.01
}
```
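
Put together, the DataLoader, loss, and fit() parameters above correspond roughly to the training call sketched below. This is a hedged reconstruction: the two German example pairs are placeholders, not the original training data.

```python
# Hedged reconstruction of the training setup described above.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("krlng/sts-GBERT-bi-encoder")

train_examples = [  # placeholder data only
    InputExample(texts=["Ein Satz.", "Ein sehr ähnlicher Satz."], label=0.9),
    InputExample(texts=["Ein Satz.", "Etwas völlig anderes."], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=10)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=4,
    warmup_steps=344,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```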
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
krlng/sts-GBERT-bi-encoder
| null |
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# sts-GBERT-bi-encoder
This is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Training
The model was trained with the parameters:
DataLoader:
'URL.dataloader.DataLoader' of length 859 with parameters:
Loss:
'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
## Full Model Architecture
## Citing & Authors
|
[
"# sts-GBERT-bi-encoder\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 859 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# sts-GBERT-bi-encoder\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 859 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss' \n\nParameters of the fit()-Method:",
"## Full Model Architecture",
"## Citing & Authors"
] |
text-generation
|
transformers
|
# testing bot Model
|
{"tags": ["conversational"]}
|
kshitiz/testing-bot-repo
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# testing bot Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# name
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "model_index": [{"name": "name", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "dataset": {"name": "glue", "type": "glue", "args": "mrpc"}}]}]}
|
ksmcg/name
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# name
This model is a fine-tuned version of bert-base-uncased on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
[
"# name\n\nThis model is a fine-tuned version of bert-base-uncased on the glue dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# name\n\nThis model is a fine-tuned version of bert-base-uncased on the glue dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.9.2\n- Pytorch 1.9.0+cu102\n- Datasets 1.11.0\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# I-BERT base model
This model, `ibert-roberta-base`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to the floating-point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
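As a quick sanity check before any finetuning, the checkpoint should load like an ordinary masked-LM checkpoint and run in full precision (quantization only kicks in once `quant_mode` is enabled, as described below); a minimal sketch:
```python
from transformers import pipeline

# fill-mask loads IBertForMaskedLM with the RoBERTa tokenizer; quant_mode is off by default
fill = pipeline("fill-mask", model="kssteven/ibert-roberta-base")
print(fill("The goal of I-BERT is integer-only <mask>."))
```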
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-base \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`.
```
{
"_name_or_path": "kssteven/ibert-roberta-base",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
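A small helper that applies both steps programmatically might look like the sketch below; the checkpoint path is a placeholder for whatever directory the full-precision run wrote.
```python
import json
from pathlib import Path

ckpt = Path("path/to/full-precision-checkpoint")  # placeholder for $OUTPUT_DIR above

# 1) flip the finetuned checkpoint into integer-only mode
config_path = ckpt / "config.json"
config = json.loads(config_path.read_text())
config["quant_mode"] = True
config_path.write_text(json.dumps(config, indent=2))

# 2) drop stale trainer files so the next run starts with fresh optimizer/scheduler state
for name in ("optimizer.pt", "scheduler.pt", "trainer_state.json"):
    stale = ckpt / name
    if stale.exists():
        stale.unlink()
```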
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
{}
|
kssteven/ibert-roberta-base
| null |
[
"transformers",
"pytorch",
"ibert",
"fill-mask",
"arxiv:1907.11692",
"arxiv:2101.01321",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"2101.01321"
] |
[] |
TAGS
#transformers #pytorch #ibert #fill-mask #arxiv-1907.11692 #arxiv-2101.01321 #autotrain_compatible #endpoints_compatible #region-us
|
# I-BERT base model
This model, 'ibert-roberta-base', is an integer-only quantized version of RoBERTa, and was introduced in this paper.
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to the floating-point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the MRPC text classification task.
### Model Quantization
Once you are done with full-precision finetuning, open up 'URL' in your checkpoint directory and set the 'quant_mode' attribute to 'true'.
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete 'URL', 'URL' and 'trainer_state.json' in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is 'model_name_or_path'.
## Citation info
If you use I-BERT, please cite our paper.
|
[
"# I-BERT base model\n\nThis model, 'ibert-roberta-base', is an integer-only quantized version of RoBERTa, and was introduced in this paper.\nI-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.\nIn particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.\nThis can result in upto 4x inference speed up as compared to floating point counterpart when tested on an Nvidia T4 GPU.\nThe best model parameters searched via quantization-aware finetuning can be then exported (e.g., to TensorRT) for integer-only deployment of the model.",
"## Finetuning Procedure\n\nFinetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.",
"### Full-precision finetuning\n\nFull-precision finetuning of I-BERT is similar to RoBERTa finetuning.\nFor instance, you can run the following command to finetune on the MRPC text classification task.",
"### Model Quantization\n\nOnce you are done with full-precision finetuning, open up 'URL' in your checkpoint directory and set the 'quantize' attribute as 'true'.\n\n\n\nThen, your model will automatically run as the integer-only mode when you load the checkpoint.\nAlso, make sure to delete 'URL', 'URL' and 'trainer_state.json' in the same directory.\nOtherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.",
"### Integer-only finetuning (Quantization-aware training)\n\nFinally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.\nNote that the only difference in the example command below is 'model_name_or_path'.\n\n\n\n\ninfo\n\nIf you use I-BERT, please cite our papaer."
] |
[
"TAGS\n#transformers #pytorch #ibert #fill-mask #arxiv-1907.11692 #arxiv-2101.01321 #autotrain_compatible #endpoints_compatible #region-us \n",
"# I-BERT base model\n\nThis model, 'ibert-roberta-base', is an integer-only quantized version of RoBERTa, and was introduced in this paper.\nI-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.\nIn particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.\nThis can result in upto 4x inference speed up as compared to floating point counterpart when tested on an Nvidia T4 GPU.\nThe best model parameters searched via quantization-aware finetuning can be then exported (e.g., to TensorRT) for integer-only deployment of the model.",
"## Finetuning Procedure\n\nFinetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.",
"### Full-precision finetuning\n\nFull-precision finetuning of I-BERT is similar to RoBERTa finetuning.\nFor instance, you can run the following command to finetune on the MRPC text classification task.",
"### Model Quantization\n\nOnce you are done with full-precision finetuning, open up 'URL' in your checkpoint directory and set the 'quantize' attribute as 'true'.\n\n\n\nThen, your model will automatically run as the integer-only mode when you load the checkpoint.\nAlso, make sure to delete 'URL', 'URL' and 'trainer_state.json' in the same directory.\nOtherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.",
"### Integer-only finetuning (Quantization-aware training)\n\nFinally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.\nNote that the only difference in the example command below is 'model_name_or_path'.\n\n\n\n\ninfo\n\nIf you use I-BERT, please cite our papaer."
] |
fill-mask
|
transformers
|
# I-BERT large model
This model, `ibert-roberta-large`, is an integer-only quantized version of [RoBERTa](https://arxiv.org/abs/1907.11692), and was introduced in [this paper](https://arxiv.org/abs/2101.01321).
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to the floating-point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the [MRPC](https://www.microsoft.com/en-us/download/details.aspx?id=52398) text classification task.
```
python examples/text-classification/run_glue.py \
--model_name_or_path kssteven/ibert-roberta-large \
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
### Model Quantization
Once you are done with full-precision finetuning, open up `config.json` in your checkpoint directory and set the `quant_mode` attribute to `true`.
```
{
"_name_or_path": "kssteven/ibert-roberta-large",
"architectures": [
"IBertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"finetuning_task": "mrpc",
"force_dequant": "none",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "ibert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"quant_mode": true,
"tokenizer_class": "RobertaTokenizer",
"transformers_version": "4.4.0.dev0",
"type_vocab_size": 1,
"vocab_size": 50265
}
```
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete `optimizer.pt`, `scheduler.pt` and `trainer_state.json` in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is `model_name_or_path`.
```
python examples/text-classification/run_glue.py \
--model_name_or_path $CHECKPOINT_DIR
--task_name MRPC \
--do_eval \
--do_train \
--evaluation_strategy epoch \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--save_steps 115 \
--learning_rate 1e-6 \
--num_train_epochs 10 \
--output_dir $OUTPUT_DIR
```
## Citation info
If you use I-BERT, please cite [our paper](https://arxiv.org/abs/2101.01321).
```
@article{kim2021bert,
title={I-BERT: Integer-only BERT Quantization},
author={Kim, Sehoon and Gholami, Amir and Yao, Zhewei and Mahoney, Michael W and Keutzer, Kurt},
journal={arXiv preprint arXiv:2101.01321},
year={2021}
}
```
|
{}
|
kssteven/ibert-roberta-large
| null |
[
"transformers",
"pytorch",
"ibert",
"fill-mask",
"arxiv:1907.11692",
"arxiv:2101.01321",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.11692",
"2101.01321"
] |
[] |
TAGS
#transformers #pytorch #ibert #fill-mask #arxiv-1907.11692 #arxiv-2101.01321 #autotrain_compatible #endpoints_compatible #region-us
|
# I-BERT large model
This model, 'ibert-roberta-large', is an integer-only quantized version of RoBERTa, and was introduced in this paper.
I-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.
In particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.
This can result in up to 4x inference speedup compared to the floating-point counterpart when tested on an Nvidia T4 GPU.
The best model parameters found via quantization-aware finetuning can then be exported (e.g., to TensorRT) for integer-only deployment of the model.
## Finetuning Procedure
Finetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.
### Full-precision finetuning
Full-precision finetuning of I-BERT is similar to RoBERTa finetuning.
For instance, you can run the following command to finetune on the MRPC text classification task.
### Model Quantization
Once you are done with full-precision finetuning, open up 'URL' in your checkpoint directory and set the 'quant_mode' attribute to 'true'.
Then, your model will automatically run in integer-only mode when you load the checkpoint.
Also, make sure to delete 'URL', 'URL' and 'trainer_state.json' in the same directory.
Otherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.
### Integer-only finetuning (Quantization-aware training)
Finally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.
Note that the only difference in the example command below is 'model_name_or_path'.
## Citation info
If you use I-BERT, please cite our paper.
|
[
"# I-BERT large model\n\nThis model, 'ibert-roberta-large', is an integer-only quantized version of RoBERTa, and was introduced in this papaer.\nI-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.\nIn particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.\nThis can result in upto 4x inference speed up as compared to floating point counterpart when tested on an Nvidia T4 GPU.\nThe best model parameters searched via quantization-aware finetuning can be then exported (e.g., to TensorRT) for integer-only deployment of the model.",
"## Finetuning Procedure\n\nFinetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.",
"### Full-precision finetuning\n\nFull-precision finetuning of I-BERT is similar to RoBERTa finetuning.\nFor instance, you can run the following command to finetune on the MRPC text classification task.",
"### Model Quantization\n\nOnce you are done with full-precision finetuning, open up 'URL' in your checkpoint directory and set the 'quantize' attribute as 'true'.\n\n\n\nThen, your model will automatically run as the integer-only mode when you load the checkpoint.\nAlso, make sure to delete 'URL', 'URL' and 'trainer_state.json' in the same directory.\nOtherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.",
"### Integer-only finetuning (Quantization-aware training)\n\nFinally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.\nNote that the only difference in the example command below is 'model_name_or_path'.\n\n\n\n\ninfo\n\nIf you use I-BERT, please cite our papaer."
] |
[
"TAGS\n#transformers #pytorch #ibert #fill-mask #arxiv-1907.11692 #arxiv-2101.01321 #autotrain_compatible #endpoints_compatible #region-us \n",
"# I-BERT large model\n\nThis model, 'ibert-roberta-large', is an integer-only quantized version of RoBERTa, and was introduced in this papaer.\nI-BERT stores all parameters with INT8 representation, and carries out the entire inference using integer-only arithmetic.\nIn particular, I-BERT replaces all floating point operations in the Transformer architectures (e.g., MatMul, GELU, Softmax, and LayerNorm) with closely approximating integer operations.\nThis can result in upto 4x inference speed up as compared to floating point counterpart when tested on an Nvidia T4 GPU.\nThe best model parameters searched via quantization-aware finetuning can be then exported (e.g., to TensorRT) for integer-only deployment of the model.",
"## Finetuning Procedure\n\nFinetuning of I-BERT consists of 3 stages: (1) Full-precision finetuning from the pretrained model on a down-stream task, (2) model quantization, and (3) integer-only finetuning (i.e., quantization-aware training) of the quantized model.",
"### Full-precision finetuning\n\nFull-precision finetuning of I-BERT is similar to RoBERTa finetuning.\nFor instance, you can run the following command to finetune on the MRPC text classification task.",
"### Model Quantization\n\nOnce you are done with full-precision finetuning, open up 'URL' in your checkpoint directory and set the 'quantize' attribute as 'true'.\n\n\n\nThen, your model will automatically run as the integer-only mode when you load the checkpoint.\nAlso, make sure to delete 'URL', 'URL' and 'trainer_state.json' in the same directory.\nOtherwise, HF will not reset the optimizer, scheduler, or trainer state for the following integer-only finetuning.",
"### Integer-only finetuning (Quantization-aware training)\n\nFinally, you will be able to run integer-only finetuning simply by loading the checkpoint file you modified.\nNote that the only difference in the example command below is 'model_name_or_path'.\n\n\n\n\ninfo\n\nIf you use I-BERT, please cite our papaer."
] |
null | null |
I love this class
|
{}
|
ktalley524/Class_Eval_Results
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
I love this class
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# GPT-Neo 2.7B (By EleutherAI)
## Model Description
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
## Training data
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
## Training procedure
This model was trained for 420 billion tokens over 400,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
## Intended Use and Limitations
Through this pretraining, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
[{'generated_text': 'EleutherAI has made a commitment to create new software packages for each of its major clients and has'}]
```
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Eval results
All evaluations were done using our [evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness). Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our [Discord](https://discord.gg/vtRgjbM).
### Linguistic Reasoning
| Model and Size | Pile BPB | Pile PPL | Wikitext PPL | Lambada PPL | Lambada Acc | Winogrande | Hellaswag |
| ---------------- | ---------- | ---------- | ------------- | ----------- | ----------- | ---------- | ----------- |
| GPT-Neo 1.3B | 0.7527 | 6.159 | 13.10 | 7.498 | 57.23% | 55.01% | 38.66% |
| GPT-2 1.5B | 1.0468 | ----- | 17.48 | 10.634 | 51.21% | 59.40% | 40.03% |
| **GPT-Neo 2.7B** | **0.7165** | **5.646** | **11.39** | **5.626** | **62.22%** | **56.50%** | **42.73%** |
| GPT-3 Ada | 0.9631 | ----- | ----- | 9.954 | 51.60% | 52.90% | 35.93% |
### Physical and Scientific Reasoning
| Model and Size | MathQA | PubMedQA | Piqa |
| ---------------- | ---------- | ---------- | ----------- |
| GPT-Neo 1.3B | 24.05% | 54.40% | 71.11% |
| GPT-2 1.5B | 23.64% | 58.33% | 70.78% |
| **GPT-Neo 2.7B** | **24.72%** | **57.54%** | **72.14%** |
| GPT-3 Ada | 24.29% | 52.80% | 68.88% |
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
```bibtex
@article{gao2020pile,
title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
journal={arXiv preprint arXiv:2101.00027},
year={2020}
}
```
To cite the codebase that this model was trained with, use
```bibtex
@software{gpt-neo,
author = {Black, Sid and Gao, Leo and Wang, Phil and Leahy, Connor and Biderman, Stella},
title = {{GPT-Neo}: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow},
url = {http://github.com/eleutherai/gpt-neo},
version = {1.0},
year = {2021},
}
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text generation", "pytorch", "the Pile", "causal-lm"], "datasets": ["the Pile"]}
|
ktangri/gpt-neo-demo
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"text generation",
"the Pile",
"causal-lm",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt_neo #text-generation #text generation #the Pile #causal-lm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
GPT-Neo 2.7B (By EleutherAI)
============================
Model Description
-----------------
GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
Training data
-------------
GPT-Neo 2.7B was trained on the Pile, a large scale curated dataset created by EleutherAI for the purpose of training this model.
Training procedure
------------------
This model was trained for 420 billion tokens over 400,000 steps. It was trained as an autoregressive language model, using cross-entropy loss.
Intended Use and Limitations
----------------------------
Through this pretraining, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is, however, best at what it was pretrained for, which is generating text from a prompt.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
### Limitations and Biases
GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.
GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Eval results
------------
All evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord.
### Linguistic Reasoning
### Physical and Scientific Reasoning
### Down-Stream Applications
TBD
### BibTeX entry and citation info
To cite this model, use
To cite the codebase that this model was trained with, use
|
[
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------\n\n\nAll evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord.",
"### Linguistic Reasoning",
"### Physical and Scientific Reasoning",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, use\n\n\nTo cite the codebase that this model was trained with, use"
] |
[
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #text generation #the Pile #causal-lm #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:",
"### Limitations and Biases\n\n\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEval results\n------------\n\n\nAll evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers. We are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness. If you would like to contribute evaluations you have done, please reach out on our Discord.",
"### Linguistic Reasoning",
"### Physical and Scientific Reasoning",
"### Down-Stream Applications\n\n\nTBD",
"### BibTeX entry and citation info\n\n\nTo cite this model, use\n\n\nTo cite the codebase that this model was trained with, use"
] |
question-answering
|
transformers
|
### Model
**[`albert-xlarge-v2`](https://huggingface.co/albert-xlarge-v2)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=albert-xlarge-v2
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 3 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 814 \
--gradient_accumulation_steps 4 \
--fp16 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 84.41842836688285 |
| f1 | 87.4628460501696 |
| total | 11873.0 |
| HasAns_exact | 80.68488529014844 |
| HasAns_f1 | 86.78245127423482 |
| HasAns_total | 5928.0 |
| NoAns_exact | 88.1412952060555 |
| NoAns_f1 | 88.1412952060555 |
| NoAns_total | 5945.0 |
| best_exact | 84.41842836688285 |
| best_exact_thresh | 0.0 |
| best_f1 | 87.46284605016956 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/albert.html#albertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
# on older transformers releases the model call returns a (start_logits, end_logits) tuple;
# on recent versions read the start_logits/end_logits attributes of the returned output instead
start_scores, end_scores = model(input_ids)
# joint log-probability of every candidate (start, end) answer span
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0]  # "no answer" score: the span starting and ending at position 0
```
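Putting the pieces together, an end-to-end sketch could look like the following; the question/context pair is made up, and the greedy argmax decoding is a simplification (production code should enforce `end >= start` and compare span scores against the no-answer score).
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")
model = AutoModelForQuestionAnswering.from_pretrained("ktrapeznikov/albert-xlarge-v2-squad-v2")

question = "What does SQuAD V2 add over V1?"
context = "SQuAD V2 pairs the SQuAD V1 questions with additional questions that cannot be answered from the paragraph."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

start = int(torch.argmax(outputs.start_logits))
end = int(torch.argmax(outputs.end_logits))
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])

# the span at position 0 serves as the "no answer" score, as in the snippet above
no_answer_score = (outputs.start_logits[0, 0] + outputs.end_logits[0, 0]).item()
print(answer, no_answer_score)
```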
|
{}
|
ktrapeznikov/albert-xlarge-v2-squad-v2
| null |
[
"transformers",
"pytorch",
"albert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #albert #question-answering #endpoints_compatible #has_space #region-us
|
### Model
'albert-xlarge-v2' fine-tuned on 'SQuAD V2' using 'run\_squad.py'
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
### Usage
See huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:
|
[
"### Model\n\n\n'albert-xlarge-v2' fine-tuned on 'SQuAD V2' using 'run\\_squad.py'",
"### Training Parameters\n\n\nTrained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb",
"### Evaluation\n\n\nEvaluation on the dev set. I did not sweep for best threshold.",
"### Usage\n\n\nSee huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:"
] |
[
"TAGS\n#transformers #pytorch #albert #question-answering #endpoints_compatible #has_space #region-us \n",
"### Model\n\n\n'albert-xlarge-v2' fine-tuned on 'SQuAD V2' using 'run\\_squad.py'",
"### Training Parameters\n\n\nTrained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb",
"### Evaluation\n\n\nEvaluation on the dev set. I did not sweep for best threshold.",
"### Usage\n\n\nSee huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:"
] |
question-answering
|
transformers
|
### Model
**[`monologg/biobert_v1.1_pubmed`](https://huggingface.co/monologg/biobert_v1.1_pubmed)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
This model is cased.
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
```bash
BASE_MODEL=monologg/biobert_v1.1_pubmed
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.97068980038743 |
| f1 | 79.37043950121722 |
| total | 11873.0 |
| HasAns_exact | 74.13967611336032 |
| HasAns_f1 | 80.94892513460755 |
| HasAns_total | 5928.0 |
| NoAns_exact | 77.79646761984861 |
| NoAns_f1 | 77.79646761984861 |
| NoAns_total | 5945.0 |
| best_exact | 75.97068980038743 |
| best_exact_thresh | 0.0 |
| best_f1 | 79.37043950121729 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
# model: BertForQuestionAnswering; input_ids: encoded question + paragraph
# (older transformers releases return a (start_logits, end_logits) tuple from the call)
start_scores, end_scores = model(input_ids)
# joint log-probability of each candidate (start, end) answer span
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]
ignore_score = span_scores[:,0,0]  # "no answer" score: the span at position 0
```
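As a shorter alternative, the high-level pipeline API handles span decoding and the no-answer case internally; a sketch with a made-up biomedical question/context:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ktrapeznikov/biobert_v1.1_pubmed_squad_v2")

result = qa(
    question="What corpus was BioBERT pretrained on?",
    context="BioBERT extends BERT by continuing pretraining on PubMed abstracts and PMC full-text articles.",
    handle_impossible_answer=True,  # allow an empty answer for SQuAD V2-style "no answer" cases
)
print(result)
```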
|
{}
|
ktrapeznikov/biobert_v1.1_pubmed_squad_v2
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #question-answering #endpoints_compatible #region-us
|
### Model
'monologg/biobert\_v1.1\_pubmed' fine-tuned on 'SQuAD V2' using 'run\_squad.py'
This model is cased.
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
### Usage
See huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:
|
[
"### Model\n\n\n'monologg/biobert\\_v1.1\\_pubmed' fine-tuned on 'SQuAD V2' using 'run\\_squad.py'\n\n\nThis model is cased.",
"### Training Parameters\n\n\nTrained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb",
"### Evaluation\n\n\nEvaluation on the dev set. I did not sweep for best threshold.",
"### Usage\n\n\nSee huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #question-answering #endpoints_compatible #region-us \n",
"### Model\n\n\n'monologg/biobert\\_v1.1\\_pubmed' fine-tuned on 'SQuAD V2' using 'run\\_squad.py'\n\n\nThis model is cased.",
"### Training Parameters\n\n\nTrained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb",
"### Evaluation\n\n\nEvaluation on the dev set. I did not sweep for best threshold.",
"### Usage\n\n\nSee huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:"
] |
text-generation
|
transformers
|
# GPT2-medium-topic-news
## Model description
GPT2-medium fine tuned on a largish news corpus conditioned on a topic, source, title
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, source, title or some subsets, prompt model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following tags for `topic: climate, weather, vaccination`.
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)
topic = "climate"
prompt = tokenizer(f"topic {topics} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True,max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
```
|
{"language": ["en"], "widget": [{"text": "topic climate source washington post title "}]}
|
ktrapeznikov/gpt2-medium-topic-news-v2
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT2-medium-topic-news
## Model description
GPT2-medium fine tuned on a largish news corpus conditioned on a topic, source, title
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, source, title or some subsets, prompt model with:
Try the following tags for 'topic: climate, weather, vaccination'.
Zero shot generation works pretty well as long as 'topic' is a single word and not too specific.
|
[
"# GPT2-medium-topic-news",
"## Model description\n\nGPT2-medium fine tuned on a largish news corpus conditioned on a topic, source, title",
"## Intended uses & limitations",
"#### How to use\n\nTo generate a news article text conditioned on a topic, source, title or some subsets, prompt model with: \n\n\nTry the following tags for 'topic: climate, weather, vaccination'.\n\nZero shot generation works pretty well as long as 'topic' is a single word and not too specific."
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT2-medium-topic-news",
"## Model description\n\nGPT2-medium fine tuned on a largish news corpus conditioned on a topic, source, title",
"## Intended uses & limitations",
"#### How to use\n\nTo generate a news article text conditioned on a topic, source, title or some subsets, prompt model with: \n\n\nTry the following tags for 'topic: climate, weather, vaccination'.\n\nZero shot generation works pretty well as long as 'topic' is a single word and not too specific."
] |
text-generation
|
transformers
|
# GPT2-medium-topic-news
## Model description
GPT2-medium fine tuned on a large news corpus conditioned on a topic
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, prompt model with:
`topic: climate article:`
The following tags were used during training:
`arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian`
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-news")
model.to(device)

topic = "climate"
prompt = tokenizer(f"topic: {topic} article:", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True, max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(list(out.cpu()[0])))
```
## Training data
## Training procedure
|
{"language": ["en"], "widget": [{"text": "topic: climate article:"}]}
|
ktrapeznikov/gpt2-medium-topic-news
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT2-medium-topic-news
## Model description
GPT2-medium fine tuned on a large news corpus conditioned on a topic
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, prompt model with:
'topic: climate article:'
The following tags were used during training:
'arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian'
Zero shot generation works pretty well as long as 'topic' is a single word and not too specific.
## Training data
## Training procedure
|
[
"# GPT2-medium-topic-news",
"## Model description\n\nGPT2-medium fine tuned on a large news corpus conditioned on a topic",
"## Intended uses & limitations",
"#### How to use\n\nTo generate a news article text conditioned on a topic, prompt model with: \n'topic: climate article:'\n\nThe following tags were used during training:\n'arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian'\n\nZero shot generation works pretty well as long as 'topic' is a single word and not too specific.",
"## Training data",
"## Training procedure"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT2-medium-topic-news",
"## Model description\n\nGPT2-medium fine tuned on a large news corpus conditioned on a topic",
"## Intended uses & limitations",
"#### How to use\n\nTo generate a news article text conditioned on a topic, prompt model with: \n'topic: climate article:'\n\nThe following tags were used during training:\n'arts law international science business politics disaster world conflict football sport sports artanddesign environment music film lifeandstyle business health commentisfree books technology media education politics travel stage uk society us money culture religion science news tv fashion uk australia cities global childrens sustainable global voluntary housing law local healthcare theguardian'\n\nZero shot generation works pretty well as long as 'topic' is a single word and not too specific.",
"## Training data",
"## Training procedure"
] |
text-generation
|
transformers
|
# GPT2-medium-topic-news
## Model description
GPT2-medium fine tuned on a small news corpus conditioned on a topic, source, title
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, source, title or some subsets, prompt model with:
```python
f"topic {topic} source"
f"topic {topic} source {source} title"
f"topic {topic} source {source} title {title} body"
```
Try the following tags for `topic: climate, weather, vaccination`.
Zero shot generation works pretty well as long as `topic` is a single word and not too specific.
```python
from transformers import AutoTokenizer, AutoModelWithLMHead

device = "cuda:0"
tokenizer = AutoTokenizer.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model = AutoModelWithLMHead.from_pretrained("ktrapeznikov/gpt2-medium-topic-small-set")
model.to(device)

topic = "climate"
prompt = tokenizer(f"topic {topic} source straitstimes title", return_tensors="pt")
out = model.generate(prompt["input_ids"].to(device), do_sample=True, max_length=500, early_stopping=True, top_p=.9)
print(tokenizer.decode(out[0].cpu(), skip_special_tokens=True))
```
## Sample Output
>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ...
>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ...
>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ...
>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do "nothing". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ...
>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a "provocation" that would have "a devastating effect" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ...
>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its "great power position to actively an ...
## Training data
## Training procedure
|
{"language": ["en"], "widget": [{"text": "topic climate source"}]}
|
ktrapeznikov/gpt2-medium-topic-small-set
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT2-medium-topic-news
## Model description
GPT2-medium fine tuned on a small news corpus conditioned on a topic, source, title
## Intended uses & limitations
#### How to use
To generate a news article text conditioned on a topic, source, title, or some subset of them, prompt the model with those fields (a sketch is shown below).
Try the following tags for 'topic: climate, weather, vaccination'.
Zero shot generation works pretty well as long as 'topic' is a single word and not too specific.
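A minimal generation sketch (added for illustration; it is not from the original card). The exact prompt format is an assumption: the widget example in this card's metadata uses the plain string `topic climate source`, while the sample outputs below use bracketed `[topic] ... [source] ... [title] ...` tags.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "ktrapeznikov/gpt2-medium-topic-small-set"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt taken from the widget example; the bracketed tag style seen in the
# sample outputs below may work as well.
prompt = "topic climate source"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```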
## Sample Output
>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ...
>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ...
>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ...
>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do "nothing". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ...
>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a "provocation" that would have "a devastating effect" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ...
>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its "great power position to actively an ...
## Training data
## Training procedure
|
[
"# GPT2-medium-topic-news",
"## Model description\n\nGPT2-medium fine tuned on a small news corpus conditioned on a topic, source, title",
"## Intended uses & limitations",
"#### How to use\n\nTo generate a news article text conditioned on a topic, source, title or some subsets, prompt model with: \n\n\nTry the following tags for 'topic: climate, weather, vaccination'.\n\nZero shot generation works pretty well as long as 'topic' is a single word and not too specific.",
"## Sample Output\n\n>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ... \n\n>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ... \n\n>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ... \n\n>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do \"nothing\". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ... \n\n>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a \"provocation\" that would have \"a devastating effect\" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ... \n\n>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its \"great power position to actively an ...",
"## Training data",
"## Training procedure"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT2-medium-topic-news",
"## Model description\n\nGPT2-medium fine tuned on a small news corpus conditioned on a topic, source, title",
"## Intended uses & limitations",
"#### How to use\n\nTo generate a news article text conditioned on a topic, source, title or some subsets, prompt model with: \n\n\nTry the following tags for 'topic: climate, weather, vaccination'.\n\nZero shot generation works pretty well as long as 'topic' is a single word and not too specific.",
"## Sample Output\n\n>[topic] military [source] straitstimes [title] Trump signs bill on military aid to Israel [body] WASHINGTON (AFP) - US President Donald Trump signed into law Thursday (April 24) legislation to provide more than US$15 billion (S$20.43 billion) in military aid to Israel, a move the Obama administration had resisted for political reasons. The White House did not immediately respond to a request for comment on the Israel measure, which Trump had sought unsuccessfully to block during the Obama pres ... \n\n>[topic] military [source] straitstimes [title] Hong Kong's leaders to discuss new travel restrictions as lockdown looms [body] HONG KONG (REUTERS) - Hong Kong authorities said they would hold a meeting of the Legislative Council on Monday (July 21) to discuss new travel restrictions on Hong Kong residents, as the city reported a record daily increase in coronavirus cases. The authorities said they would consider the proposal after meeting government chiefs and reviewing other measures. The co ... \n\n>[topic] military [source] straitstimes [title] Trump signs Bill that gives US troops wider latitude to conduct operations abroad [body] WASHINGTON (AFP) - US President Donald Trump on Thursday (July 23) signed a controversial law that gives US troops more leeway to conduct operations abroad, as he seeks to shore up the embattled government's defences against the coronavirus pandemic and stave off a potentially devastating election defeat. Trump's signature Bill, named after his late father's l ... \n\n>[topic] military [source] straitstimes [title] China's Foreign Ministry responds to Japan's statement on South China Sea: 'No one should assume the role of mediator' [body] BEIJING (AFP) - The Ministry of Foreign Affairs on Tuesday (Oct 18) told Japan to stop taking sides in the South China Sea issue and not interfere in the bilateral relationship, as Japan said it would do \"nothing\". Foreign Ministry spokesman Zhao Lijian told reporters in Beijing that the Chinese government's position on the ... \n\n>[topic] military [source] straitstimes [title] US warns North Korea on potential nuclear strike [body] WASHINGTON - The United States warned North Korea last Friday that an attack by the North could be a \"provocation\" that would have \"a devastating effect\" on its security, as it took aim at Pyongyang over its continued efforts to develop weapons of mass destruction. US Secretary of State Mike Pompeo was speaking at the conclusion of a White House news conference when a reporter asked him how t ... \n\n>[topic] military [source] straitstimes [title] China calls Hong Kong to halt 'illegal and illegal military acts' [body] WASHINGTON • Chinese Foreign Ministry spokeswoman Hua Chunying said yesterday that Hong Kong must stop 'illegal and illegal military acts' before Beijing can recognise the city as its own. In her annual State Councillor's speech, Ms Hua made the case for Hong Kong to resume Hong Kong's status as a semi-autonomous city, and vowed to use its \"great power position to actively an ...",
"## Training data",
"## Training procedure"
] |
question-answering
|
transformers
|
### Model
**[`allenai/scibert_scivocab_uncased`](https://huggingface.co/allenai/scibert_scivocab_uncased)** fine-tuned on **[`SQuAD V2`](https://rajpurkar.github.io/SQuAD-explorer/)** using **[`run_squad.py`](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad.py)**
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti (11 GB) GPUs
```bash
BASE_MODEL=allenai/scibert_scivocab_uncased
python run_squad.py \
--version_2_with_negative \
--model_type albert \
--model_name_or_path $BASE_MODEL \
--output_dir $OUTPUT_MODEL \
--do_eval \
--do_lower_case \
--train_file $SQUAD_DIR/train-v2.0.json \
--predict_file $SQUAD_DIR/dev-v2.0.json \
--per_gpu_train_batch_size 18 \
--per_gpu_eval_batch_size 64 \
--learning_rate 3e-5 \
--num_train_epochs 3.0 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 2000 \
--threads 24 \
--warmup_steps 550 \
--gradient_accumulation_steps 1 \
--fp16 \
--logging_steps 50 \
--do_train
```
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
| | val |
|-------------------|-------------------|
| exact | 75.07790785816559 |
| f1 | 78.47735207283013 |
| total | 11873.0 |
| HasAns_exact | 70.76585695006747 |
| HasAns_f1 | 77.57449412292718 |
| HasAns_total | 5928.0 |
| NoAns_exact | 79.37762825904122 |
| NoAns_f1 | 79.37762825904122 |
| NoAns_total | 5945.0 |
| best_exact | 75.08633032931863 |
| best_exact_thresh | 0.0 |
| best_f1 | 78.48577454398324 |
| best_f1_thresh | 0.0 |
### Usage
See [huggingface documentation](https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering). Training on `SQuAD V2` allows the model to score if a paragraph contains an answer:
```python
# input_ids: a tokenized (question, context) pair
start_scores, end_scores = model(input_ids)  # older transformers versions return a (start_logits, end_logits) tuple
span_scores = start_scores.softmax(dim=1).log()[:,:,None] + end_scores.softmax(dim=1).log()[:,None,:]  # log-probability of each (start, end) span
ignore_score = span_scores[:,0,0]  # "no answer" score: the span starting and ending at [CLS]
```
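For a self-contained version of the snippet above, a sketch along these lines should work (assuming a recent `transformers` API; the question and context strings are invented placeholders, not from the card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "ktrapeznikov/scibert_scivocab_uncased_squad_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What protein aggregates in ALS?"                        # hypothetical example
context = "TDP-43 aggregation is a pathological hallmark of ALS."   # hypothetical example

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    output = model(**inputs)

start_scores, end_scores = output.start_logits, output.end_logits
span_scores = start_scores.softmax(dim=1).log()[:, :, None] + end_scores.softmax(dim=1).log()[:, None, :]
ignore_score = span_scores[:, 0, 0]  # "no answer" score
```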
|
{}
|
ktrapeznikov/scibert_scivocab_uncased_squad_v2
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #question-answering #endpoints_compatible #region-us
|
### Model
'allenai/scibert\_scivocab\_uncased' fine-tuned on 'SQuAD V2' using 'run\_squad.py'
### Training Parameters
Trained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb
### Evaluation
Evaluation on the dev set. I did not sweep for best threshold.
### Usage
See huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:
|
[
"### Model\n\n\n'allenai/scibert\\_scivocab\\_uncased' fine-tuned on 'SQuAD V2' using 'run\\_squad.py'",
"### Training Parameters\n\n\nTrained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb",
"### Evaluation\n\n\nEvaluation on the dev set. I did not sweep for best threshold.",
"### Usage\n\n\nSee huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #question-answering #endpoints_compatible #region-us \n",
"### Model\n\n\n'allenai/scibert\\_scivocab\\_uncased' fine-tuned on 'SQuAD V2' using 'run\\_squad.py'",
"### Training Parameters\n\n\nTrained on 4 NVIDIA GeForce RTX 2080 Ti 11Gb",
"### Evaluation\n\n\nEvaluation on the dev set. I did not sweep for best threshold.",
"### Usage\n\n\nSee huggingface documentation. Training on 'SQuAD V2' allows the model to score if a paragraph contains an answer:"
] |
null | null |
textsummarizer
|
{}
|
kumaran/textsummarizer
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
textsummarizer
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# House BOT
|
{"tags": ["conversational"]}
|
kunalbhargava/DialoGPT-small-housebot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# House BOT
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask
|
transformers
|
# telugu_bertu
## Model description
This model is a BERT MLM model trained on Telugu. Please use it from the terminal as the web interface has encoding issues.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models. And also, please cite my model if you are using it in your pipeline.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoModelWithLMHead, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("kuppuluri/telugu_bertu",
clean_text=False,
handle_chinese_chars=False,
strip_accents=False,
wordpieces_prefix='##')
model = AutoModelWithLMHead.from_pretrained("kuppuluri/telugu_bertu")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
results = fill_mask("మక్దూంపల్లి పేరుతో చాలా [MASK] ఉన్నాయి.")
```
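The pipeline returns a ranked list of candidate fills; a short way to inspect them (a small illustrative addition, not part of the original card):
```python
# Each candidate carries the predicted token, its score, and the completed sentence.
for candidate in results:
    print(candidate["token_str"], round(candidate["score"], 4), candidate["sequence"])
```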
|
{"language": "te"}
|
kuppuluri/telugu_bertu
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"te",
"doi:10.57967/hf/0264",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"te"
] |
TAGS
#transformers #pytorch #jax #safetensors #bert #fill-mask #te #doi-10.57967/hf/0264 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# telugu_bertu
## Model description
This model is a BERT MLM model trained on Telugu. Please use it from the terminal as the web interface has encoding issues.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models. And also, please cite my model if you are using it in your pipeline.
## Intended uses & limitations
#### How to use
|
[
"# telugu_bertu",
"## Model description\n\nThis model is a BERT MLM model trained on Telugu. Please use it from the terminal as the web interface has encoding issues.\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models. And also, please cite my model if you are using it in your pipeline.",
"## Intended uses & limitations",
"#### How to use"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #fill-mask #te #doi-10.57967/hf/0264 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# telugu_bertu",
"## Model description\n\nThis model is a BERT MLM model trained on Telugu. Please use it from the terminal as the web interface has encoding issues.\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models. And also, please cite my model if you are using it in your pipeline.",
"## Intended uses & limitations",
"#### How to use"
] |
token-classification
|
transformers
|
# Named Entity Recognition Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web interface for inference has a few encoding issues for Telugu.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_ner',
labels=[
'B-PERSON', 'I-ORG', 'B-ORG', 'I-LOC', 'B-MISC',
'I-MISC', 'I-PERSON', 'B-LOC', 'O'
],
use_cuda=False,
args={"use_multiprocessing": False})
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
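As a usage note (an addition based on how `simpletransformers` typically behaves, not on the original card): `predict()` returns a pair, so the results can be unpacked as follows.
```python
# predictions is a list with one entry per input sentence; each entry is a
# list of {word: label} dicts. raw_outputs holds the underlying model scores.
predictions, raw_outputs = results
print(predictions[0])
```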
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were
eval_loss = 0.0004407190410447974
f1_score = 0.999519076627124
precision = 0.9994389677005691
recall = 0.9995991983967936
|
{}
|
kuppuluri/telugu_bertu_ner
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Named Entity Recognition Model for Telugu
#### How to use
Use the below script from your python terminal as the web interface for inference has few encoding issues for Telugu
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
## Training data
Training data is from URL
## Eval results
On the test set my results were
eval_loss = 0.0004407190410447974
f1_score = 0.999519076627124
precision = 0.9994389677005691
recall = 0.9995991983967936
|
[
"# Named Entity Recognition Model for Telugu",
"#### How to use\nUse the below script from your python terminal as the web interface for inference has few encoding issues for Telugu\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.",
"## Training data\n\nTraining data is from URL",
"## Eval results\n\nOn the test set my results were\n\neval_loss = 0.0004407190410447974\n\nf1_score = 0.999519076627124\n\nprecision = 0.9994389677005691\n\nrecall = 0.9995991983967936"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Named Entity Recognition Model for Telugu",
"#### How to use\nUse the below script from your python terminal as the web interface for inference has few encoding issues for Telugu\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.",
"## Training data\n\nTraining data is from URL",
"## Eval results\n\nOn the test set my results were\n\neval_loss = 0.0004407190410447974\n\nf1_score = 0.999519076627124\n\nprecision = 0.9994389677005691\n\nrecall = 0.9995991983967936"
] |
token-classification
|
transformers
|
# Part of Speech tagging Model for Telugu
#### How to use
Use the script below from your Python terminal, as the web interface for inference has a few encoding issues for Telugu.
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
```python
from simpletransformers.ner import NERModel
model = NERModel('bert',
'kuppuluri/telugu_bertu_pos',
args={"use_multiprocessing": False},
labels=[
'QC', 'JJ', 'NN', 'QF', 'RDP', 'O',
'NNO', 'PRP', 'RP', 'VM', 'WQ',
'PSP', 'UT', 'CC', 'INTF', 'SYMP',
'NNP', 'INJ', 'SYM', 'CL', 'QO',
'DEM', 'RB', 'NST', ],
use_cuda=False)
text = "విరాట్ కోహ్లీ కూడా అదే నిర్లక్ష్యాన్ని ప్రదర్శించి కేవలం ఒక పరుగుకే రనౌటై పెవిలియన్ చేరాడు ."
results = model.predict([text])
```
## Training data
Training data is from https://github.com/anikethjr/NER_Telugu
## Eval results
On the test set my results were
eval_loss = 0.0036797842364565416
f1_score = 0.9983795127912227
precision = 0.9984325602401637
recall = 0.9983264709788816
|
{}
|
kuppuluri/telugu_bertu_pos
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Part of Speech tagging Model for Telugu
#### How to use
Use the below script from your python terminal as the web interface for inference has few encoding issues for Telugu
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
## Training data
Training data is from URL
## Eval results
On the test set my results were
eval_loss = 0.0036797842364565416
f1_score = 0.9983795127912227
precision = 0.9984325602401637
recall = 0.9983264709788816
|
[
"# Part of Speech tagging Model for Telugu",
"#### How to use\nUse the below script from your python terminal as the web interface for inference has few encoding issues for Telugu\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.",
"## Training data\n\nTraining data is from URL",
"## Eval results\n\nOn the test set my results were\n\neval_loss = 0.0036797842364565416\n\nf1_score = 0.9983795127912227\n\nprecision = 0.9984325602401637\n\nrecall = 0.9983264709788816"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Part of Speech tagging Model for Telugu",
"#### How to use\nUse the below script from your python terminal as the web interface for inference has few encoding issues for Telugu\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.",
"## Training data\n\nTraining data is from URL",
"## Eval results\n\nOn the test set my results were\n\neval_loss = 0.0036797842364565416\n\nf1_score = 0.9983795127912227\n\nprecision = 0.9984325602401637\n\nrecall = 0.9983264709788816"
] |
question-answering
|
transformers
|
# Telugu Question-Answering model trained on Tydiqa dataset from Google
#### How to use
Use the script below from your Python terminal, as the web interface for inference has a few encoding issues for Telugu.
```python
from transformers import pipeline, AutoModelForQuestionAnswering, AutoTokenizer

model_name = "kuppuluri/telugu_bertu_tydiqa"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name,
                                          clean_text=False,
                                          handle_chinese_chars=False,
                                          strip_accents=False,
                                          wordpieces_prefix='##')
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
# Supply your own Telugu question and context strings here.
result = nlp({'question': question, 'context': context})
```
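The pipeline output is a plain dict, so the prediction can be read out directly (a small illustrative addition):
```python
# result contains the extracted answer, its confidence score, and the character
# offsets of the answer span within the context.
print(result["answer"], result["score"], result["start"], result["end"])
```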
## Training data
I used Tydiqa Telugu data from Google https://github.com/google-research-datasets/tydiqa
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
|
{}
|
kuppuluri/telugu_bertu_tydiqa
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"question-answering",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #safetensors #bert #question-answering #endpoints_compatible #has_space #region-us
|
# Telugu Question-Answering model trained on Tydiqa dataset from Google
#### How to use
Use the below script from your python terminal as the web interface for inference has few encoding issues for Telugu
## Training data
I used Tydiqa Telugu data from Google URL
PS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models.
|
[
"# Telugu Question-Answering model trained on Tydiqa dataset from Google",
"#### How to use\nUse the below script from your python terminal as the web interface for inference has few encoding issues for Telugu",
"## Training data\nI used Tydiqa Telugu data from Google URL\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #bert #question-answering #endpoints_compatible #has_space #region-us \n",
"# Telugu Question-Answering model trained on Tydiqa dataset from Google",
"#### How to use\nUse the below script from your python terminal as the web interface for inference has few encoding issues for Telugu",
"## Training data\nI used Tydiqa Telugu data from Google URL\n\nPS: If you find my model useful, I would appreciate a note from you as it would encourage me to continue improving it and also add new models."
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9305
- Recall: 0.9505
- F1: 0.9404
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
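For reference (not part of the auto-generated card), the hyperparameters above map roughly onto a `TrainingArguments` configuration like the following sketch; the batch size is assumed to be per device and the output directory name is arbitrary:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",   # arbitrary choice
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```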
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0869 | 1.0 | 1756 | 0.0680 | 0.9174 | 0.9342 | 0.9257 | 0.9827 |
| 0.0334 | 2.0 | 3512 | 0.0620 | 0.9305 | 0.9470 | 0.9387 | 0.9853 |
| 0.0233 | 3.0 | 5268 | 0.0611 | 0.9305 | 0.9505 | 0.9404 | 0.9861 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9304777594728171, "name": "Precision"}, {"type": "recall", "value": 0.9505217098619994, "name": "Recall"}, {"type": "f1", "value": 0.9403929403929404, "name": "F1"}, {"type": "accuracy", "value": 0.9861070230176017, "name": "Accuracy"}]}]}]}
|
kurianbenoy/bert-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0611
* Precision: 0.9305
* Recall: 0.9505
* F1: 0.9404
* Accuracy: 0.9861
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3073
- Accuracy: 0.923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2744 | 1.0 | 1563 | 0.2049 | 0.921 |
| 0.1572 | 2.0 | 3126 | 0.2308 | 0.923 |
| 0.0917 | 3.0 | 4689 | 0.3073 | 0.923 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.923, "name": "Accuracy"}]}]}]}
|
kurianbenoy/distilbert-base-uncased-finetuned-imdb
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-imdb
======================================
This model is a fine-tuned version of distilbert-base-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3073
* Accuracy: 0.923
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2165
- Accuracy: 0.9303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2749 | 1.0 | 3125 | 0.2165 | 0.9303 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": "plain_text"}, "metrics": [{"type": "accuracy", "value": 0.93032, "name": "Accuracy"}]}]}]}
|
kurianbenoy/distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-sst-2-english-finetuned-imdb
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2165
* Accuracy: 0.9303
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-imdb #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
null | null |
This model can predict which categories a specific competitive problem falls into
|
{}
|
kurone/cp_tags_prediction
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
This model can predict which categories a specific competitive problem falls into
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# Rick DiabloGPT Model
|
{"tags": ["conversational"]}
|
kvothe28/DiabloGPT-small-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick DiabloGPT Model
|
[
"# Rick DiabloGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick DiabloGPT Model"
] |
automatic-speech-recognition
|
transformers
|
See the fine-tuning tutorial at https://huggingface.co/blog/fine-tune-wav2vec2-english.
Use the processor from https://huggingface.co/facebook/wav2vec2-base.
|
{}
|
kwang1993/wav2vec2-base-timit-demo
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us
|
URL
Use the processor from URL
|
[] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-askubuntu
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
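As a small usage illustration (not in the original card), the returned embeddings can be compared with cosine similarity:
```python
import numpy as np

# sentence_embeddings is a (2, hidden_size) array; cosine similarity indicates
# how semantically close the model considers the two sentences.
a, b = sentence_embeddings
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)
```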
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-askubuntu
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-askubuntu
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on AskUbuntu with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, an convenient way is through SentenceTransformers. So please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-askubuntu2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on AskUbuntu with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-askubuntu2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on AskUbuntu in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on AskUbuntu with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-askubuntu2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'askubuntu'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-askubuntu2nli_stsb
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-askubuntu2nli_stsb
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on AskUbuntu with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
To use this model, an convenient way is through SentenceTransformers. So please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-askubuntu2nli_stsb\n\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on AskUbuntu with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n\n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-askubuntu2nli_stsb\n\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain AskUbuntu. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on AskUbuntu with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n\n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-cqadupstack
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
To use this model, a convenient way is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). So please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-cqadupstack
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-cqadupstack
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on cqadupstack with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through SentenceTransformers. Please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-cqadupstack2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on cqadupstack with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-cqadupstack2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on cqadupstack in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on cqadupstack with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on cqadupstack with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'cqadupstack'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-cqadupstack2nli_stsb
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-cqadupstack2nli_stsb
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on cqadupstack with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through SentenceTransformers. Please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-cqadupstack2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on cqadupstack with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-cqadupstack2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain cqadupstack. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on cqadupstack with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-scidocs
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-scidocs
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-scidocs
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on scidocs with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through SentenceTransformers. Please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-scidocs2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on scidocs with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-scidocs2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on scidocs in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on scidocs with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-scidocs2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on scidocs with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'scidocs'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-scidocs2nli_stsb
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-scidocs2nli_stsb
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on scidocs with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through SentenceTransformers. Please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-scidocs2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on scidocs with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-scidocs2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain scidocs. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on scidocs with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-twitterpara
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-twitterpara
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-twitterpara
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on twitterpara with the TSDAE objective;
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through SentenceTransformers. Please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-twitterpara2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on twitterpara with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-twitterpara2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model was only trained with the TSDAE objective on twitterpara in an unsupervised manner. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on twitterpara with the TSDAE objective;\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
feature-extraction
|
transformers
|
# kwang2049/TSDAE-twitterpara2nli_stsb
This is a model from the paper ["TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning"](https://arxiv.org/abs/2104.06979). This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:
1. Initialized with [bert-base-uncased](https://huggingface.co/bert-base-uncased);
2. Unsupervised training on twitterpara with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through [SentenceTransformers](https://github.com/UKPLab/sentence-transformers). Please install it via:
```bash
pip install sentence-transformers
```
And then load the model and use it to encode sentences:
```python
from sentence_transformers import SentenceTransformer, models
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
sentence_embeddings = model.encode(['This is the first sentence.', 'This is the second one.'])
```
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit [USEB](https://github.com/UKPLab/useb):
```bash
pip install useb # Or git clone and pip install .
python -m useb.downloading all # Download both training and evaluation data
```
And then do the evaluation:
```python
from sentence_transformers import SentenceTransformer, models
import torch
from useb import run_on
dataset = 'twitterpara'
model_name_or_path = f'kwang2049/TSDAE-{dataset}2nli_stsb'
model = SentenceTransformer(model_name_or_path)
model[1] = models.Pooling(model[0].get_word_embedding_dimension(), pooling_mode='cls') # Note this model uses CLS-pooling
@torch.no_grad()
def semb_fn(sentences) -> torch.Tensor:
return torch.Tensor(model.encode(sentences, show_progress_bar=False))
result = run_on(
dataset,
semb_fn=semb_fn,
eval_type='test',
data_eval_path='data-eval'
)
```
## Training
Please refer to [the page of TSDAE training](https://github.com/UKPLab/sentence-transformers/tree/master/examples/unsupervised_learning/TSDAE) in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication [TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning](https://arxiv.org/abs/2104.06979):
```bibtex
@article{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.06979",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.06979",
}
```
|
{}
|
kwang2049/TSDAE-twitterpara2nli_stsb
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2104.06979",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.06979"
] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us
|
# kwang2049/TSDAE-twitterpara2nli_stsb
This is a model from the paper "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning". This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:
1. Initialized with bert-base-uncased;
2. Unsupervised training on twitterpara with the TSDAE objective;
3. Supervised training on the NLI data with cross-entropy loss;
4. Supervised training on the STSb data with MSE loss.
The pooling method is CLS-pooling.
## Usage
A convenient way to use this model is through SentenceTransformers. Please install it via:
And then load the model and use it to encode sentences:
## Evaluation
To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:
And then do the evaluation:
## Training
Please refer to the page of TSDAE training in SentenceTransformers.
## Cite & Authors
If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning:
|
[
"# kwang2049/TSDAE-twitterpara2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on twitterpara with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #arxiv-2104.06979 #endpoints_compatible #region-us \n",
"# kwang2049/TSDAE-twitterpara2nli_stsb\nThis is a model from the paper \"TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning\". This model adapts the knowledge from the NLI and STSb data to the specific domain twitterpara. Training procedure of this model:\n 1. Initialized with bert-base-uncased;\n 2. Unsupervised training on twitterpara with the TSDAE objective;\n 3. Supervised training on the NLI data with cross-entropy loss;\n 4. Supervised training on the STSb data with MSE loss.\n \n The pooling method is CLS-pooling.\n \n ## Usage\n To use this model, an convenient way is through SentenceTransformers. So please install it via:\n \n And then load the model and use it to encode sentences:\n \n ## Evaluation\n To evaluate the model against the datasets used in the paper, please install our evaluation toolkit USEB:\n \n And then do the evaluation:\n \n \n ## Training\n Please refer to the page of TSDAE training in SentenceTransformers.\n \n ## Cite & Authors\n If you use the code for evaluation, feel free to cite our publication TSDAE: Using Transformer-based Sequential Denoising Auto-Encoderfor Unsupervised Sentence Embedding Learning:"
] |
fill-mask
|
transformers
|
# Albert base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, AlbertModel
tokenizer_albert = BertTokenizerFast.from_pretrained("kykim/albert-kor-base")
model_albert = AlbertModel.from_pretrained("kykim/albert-kor-base")
```
|
{"language": "ko"}
|
kykim/albert-kor-base
| null |
[
"transformers",
"pytorch",
"tf",
"albert",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #albert #fill-mask #ko #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Albert base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on GitHub
|
[
"# Albert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
[
"TAGS\n#transformers #pytorch #tf #albert #fill-mask #ko #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Albert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
fill-mask
|
transformers
|
# Bert base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, BertModel
tokenizer_bert = BertTokenizerFast.from_pretrained("kykim/bert-kor-base")
model_bert = BertModel.from_pretrained("kykim/bert-kor-base")
```
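A minimal follow-up sketch of what to do with the loaded model; the example sentence and the choice of the `[CLS]` vector are illustrative assumptions, not part of the original card:
```python
import torch

# Encode one Korean sentence and take the [CLS] hidden state as a simple sentence vector.
inputs = tokenizer_bert("한국어 문장을 인코딩합니다.", return_tensors="pt")
with torch.no_grad():
    outputs = model_bert(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)
```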
|
{"language": "ko"}
|
kykim/bert-kor-base
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #ko #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Bert base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on GitHub
|
[
"# Bert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #ko #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Bert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
text2text-generation
|
transformers
|
# Bert base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
# only for pytorch in transformers
from transformers import BertTokenizerFast, EncoderDecoderModel
tokenizer = BertTokenizerFast.from_pretrained("kykim/bertshared-kor-base")
model = EncoderDecoderModel.from_pretrained("kykim/bertshared-kor-base")
```
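The checkpoint is a BERT-based encoder-decoder intended for text-to-text fine-tuning. Below is a hedged, hypothetical sketch of running generation with it; the `decoder_start_token_id` choice and the prompt are assumptions, and meaningful output generally requires fine-tuning on a downstream seq2seq task first:
```python
import torch

inputs = tokenizer("요약할 한국어 문장입니다.", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        inputs.input_ids,
        decoder_start_token_id=tokenizer.cls_token_id,  # assumption: [CLS] as decoder start token
        max_length=32,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```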
|
{"language": "ko"}
|
kykim/bertshared-kor-base
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #ko #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Bert base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on GitHub
|
[
"# Bert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #ko #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Bert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
null |
transformers
|
# Electra base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import ElectraTokenizerFast, ElectraModel
tokenizer_electra = ElectraTokenizerFast.from_pretrained("kykim/electra-kor-base")
model = ElectraModel.from_pretrained("kykim/electra-kor-base")
```
|
{"language": "ko"}
|
kykim/electra-kor-base
| null |
[
"transformers",
"pytorch",
"tf",
"electra",
"pretraining",
"ko",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #electra #pretraining #ko #endpoints_compatible #has_space #region-us
|
# Electra base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on GitHub
|
[
"# Electra base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
[
"TAGS\n#transformers #pytorch #tf #electra #pretraining #ko #endpoints_compatible #has_space #region-us \n",
"# Electra base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
feature-extraction
|
transformers
|
# Funnel-transformer base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import FunnelTokenizer, FunnelModel
tokenizer = FunnelTokenizer.from_pretrained("kykim/funnel-kor-base")
model = FunnelModel.from_pretrained("kykim/funnel-kor-base")
```
|
{"language": "ko"}
|
kykim/funnel-kor-base
| null |
[
"transformers",
"pytorch",
"tf",
"funnel",
"feature-extraction",
"ko",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #funnel #feature-extraction #ko #endpoints_compatible #has_space #region-us
|
# Funnel-transformer base model for Korean
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on GitHub
|
[
"# Funnel-transformer base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
[
"TAGS\n#transformers #pytorch #tf #funnel #feature-extraction #ko #endpoints_compatible #has_space #region-us \n",
"# Funnel-transformer base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
text-generation
|
transformers
|
# GPT-3 small model for Korean (based on GPT-2)
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on [GitHub](https://github.com/kiyoungkim1/LM-kor)
```python
from transformers import BertTokenizerFast, GPT2LMHeadModel
tokenizer_gpt3 = BertTokenizerFast.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
input_ids = tokenizer_gpt3.encode("text to tokenize")[1:] # remove cls token
model_gpt3 = GPT2LMHeadModel.from_pretrained("kykim/gpt3-kor-small_based_on_gpt2")
```
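A hedged generation sketch building on the snippet above; the prompt and sampling settings are illustrative assumptions, not part of the original card:
```python
import torch

prompt = "오늘 날씨는"  # hypothetical prompt
input_ids = torch.tensor([tokenizer_gpt3.encode(prompt, add_special_tokens=False)])  # skip [CLS]/[SEP]
generated = model_gpt3.generate(input_ids, max_length=32, do_sample=True, top_p=0.95)
print(tokenizer_gpt3.decode(generated[0]))
```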
|
{"language": "ko", "tags": ["text-generation"]}
|
kykim/gpt3-kor-small_based_on_gpt2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #jax #gpt2 #text-generation #ko #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# GPT-3 small model for Korean (based on GPT-2)
* A 70GB Korean text dataset and 42,000 lower-cased subwords are used
* Check the model performance and other Korean language models on GitHub
|
[
"# Bert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #gpt2 #text-generation #ko #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Bert base model for Korean\n\n* 70GB Korean text dataset and 42000 lower-cased subwords are used\n* Check the model performance and other language models for Korean in github"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
## Model description
More information needed
## Intended uses & limitations
More information needed
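As the card leaves usage unspecified, the following is a minimal, hedged sketch of querying the domain-adapted masked language model through the standard `fill-mask` pipeline; the example sentence is made up:
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="kyo/distilbert-base-uncased-finetuned-imdb")
for prediction in unmasker("This movie is a great [MASK]."):
    print(prediction["token_str"], prediction["score"])
```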
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imdb"], "model-index": [{"name": "distilbert-base-uncased-finetuned-imdb", "results": []}]}
|
kyo/distilbert-base-uncased-finetuned-imdb
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-imdb
======================================
This model is a fine-tuned version of distilbert-base-uncased on the imdb dataset.
It achieves the following results on the evaluation set:
* Loss: 2.4718
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #fill-mask #generated_from_trainer #dataset-imdb #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
Google's mt5-base fine-tuned on Japanese text to solve the error detection and correction task.
# 日本語誤り訂正
- "吾輩をは猫である。名前えはまだない。"→"吾輩は猫である。名前はまだない。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: [link](http://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9EWikipedia%E5%85%A5%E5%8A%9B%E8%AA%A4%E3%82%8A%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) *used only first 20,000 text pairs.
- prefix: "correction: " (note: trained on this single task only; see the usage sketch below.)
- text-to-textのお気持ち体験版ぐらいの感覚でどうぞ.
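A minimal usage sketch assuming the standard Hugging Face seq2seq API; the tokenizer/model classes and generation settings are not stated in the card and are assumptions:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("kz/mt5base-finetuned-ECC-japanese-small")
model = AutoModelForSeq2SeqLM.from_pretrained("kz/mt5base-finetuned-ECC-japanese-small")

text = "correction: 吾輩をは猫である。名前えはまだない。"  # single-task prefix as described above
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```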
## 参考
- "東北大学でMASKが研究をしています。"→"東北大学でMASKの研究をしています。" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?
- "東北大学でマスクが研究をしています。"→"東北大学でマスクの研究をしています。"
- "東北大学でイーロン・マスクが研究をしています。"→"東北大学でイーロン・マスクが研究をしています。"
- "東北大学で「イーロン・マスク」が研究をしています。"→"東北大学で「イーロン・マスク」の研究をしています。" 単語の意味も考慮されている?
- "東北大学でイマスクが研究をしています。"→"東北大学でイマスクの研究をしています。"
- "東北大学でクが研究をしています。"→"東北大学でコンピューターが研究をしています。" それはちょっと待って。
## 参考 extra_idを用い探索 <>は半角に変更してください
- "東北大学で <extra_id_0> の研究をしています。"→"東北大学で化学の研究をしています。"
- "東北大学で <extra_id_0> が研究をしています。"→"東北大学で工学が研究をしています。" 工学さん。
- "吾輩は <extra_id_0> である。"→"吾輩は吾輩である。"
- "答えは猫です。吾輩は <extra_id_0> である。"→"答えは猫です。吾輩は猫である。"
- "答えは猫です。吾輩の <extra_id_0> である。"→"答えは猫です。吾輩の心は猫である。"
- "私は猫です。私は <extra_id_0>"→"私は猫です。私は猫です。"
- "私は猫です。N/A <extra_id_0>"→"猫です。"
- "あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼女は猫です。"
- "あなたは女性で猫です。彼は犬です。彼は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。"
- "あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼は男性で猫です。"
- "あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>"→"あなたは女性で猫です。彼は犬です。ライオンは猫です。"
- "あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>"→"あなたが女性で猫です。彼は犬です。ライオンが犬です。"
- "Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。"→"Aは11、Bは9。Aは11。Bは9。"
- "彼の名前はallenです。彼のnameは <extra_id_0>"→"彼の名前はallenです。彼の名前は英語です。"
- "translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>"→"赤い花. => red flower. 青い花. => blue flower" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか
## Prompting参考
Chain of Thought Prompting Elicits Reasoning in Large Language Models
https://arxiv.org/abs/2201.11903
**check in progress**
## License
- The MIT license
|
{"language": "ja", "license": "mit", "widget": [{"text": "\u543e\u8f29\u3092\u306f\u732b\u3067\u3042\u308b\u3002\u3092\u66f8\u3044\u305f\u4f5c\u5bb6\u306f\uff0c\u590f\u76ee\u6f31 <extra_id_0>"}, {"text": "\u543e\u8f29\u3092\u306f\u732b\u3067\u3042\u308b\u3002\u540d\u524d\u3048\u306f\u307e\u3060\u306a\u3044\u3002"}, {"text": "translate japanese to english: \u8d64\u3044\u82b1\uff0e => red flower. \u9752\u3044\u82b1\uff0e => <extra_id_0>"}]}
|
kz/mt5base-finetuned-ECC-japanese-small
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"ja",
"arxiv:2201.11903",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2201.11903"
] |
[
"ja"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #ja #arxiv-2201.11903 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Google's mt5-base fine-tuned on Japanese text to solve the error detection and correction task.
# 日本語誤り訂正
- "吾輩をは猫である。名前えはまだない。"→"吾輩は猫である。名前はまだない。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: link *used only first 20,000 text pairs.
- prefix: "correction: " (notice: single task trained.)
- text-to-textのお気持ち体験版ぐらいの感覚でどうぞ.
## 参考
- "東北大学でMASKが研究をしています。"→"東北大学でMASKの研究をしています。" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?
- "東北大学でマスクが研究をしています。"→"東北大学でマスクの研究をしています。"
- "東北大学でイーロン・マスクが研究をしています。"→"東北大学でイーロン・マスクが研究をしています。"
- "東北大学で「イーロン・マスク」が研究をしています。"→"東北大学で「イーロン・マスク」の研究をしています。" 単語の意味も考慮されている?
- "東北大学でイマスクが研究をしています。"→"東北大学でイマスクの研究をしています。"
- "東北大学でクが研究をしています。"→"東北大学でコンピューターが研究をしています。" それはちょっと待って。
## 参考 extra_idを用い探索 <>は半角に変更してください
- "東北大学で <extra_id_0> の研究をしています。"→"東北大学で化学の研究をしています。"
- "東北大学で <extra_id_0> が研究をしています。"→"東北大学で工学が研究をしています。" 工学さん。
- "吾輩は <extra_id_0> である。"→"吾輩は吾輩である。"
- "答えは猫です。吾輩は <extra_id_0> である。"→"答えは猫です。吾輩は猫である。"
- "答えは猫です。吾輩の <extra_id_0> である。"→"答えは猫です。吾輩の心は猫である。"
- "私は猫です。私は <extra_id_0>"→"私は猫です。私は猫です。"
- "私は猫です。N/A <extra_id_0>"→"猫です。"
- "あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼女は猫です。"
- "あなたは女性で猫です。彼は犬です。彼は <extra_id_0>"→"あなたは女性で猫です。彼は犬です。"
- "あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>"→"あなたは女性で猫です。彼は犬です。彼は男性で猫です。"
- "あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>"→"あなたは女性で猫です。彼は犬です。ライオンは猫です。"
- "あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>"→"あなたが女性で猫です。彼は犬です。ライオンが犬です。"
- "Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。"→"Aは11、Bは9。Aは11。Bは9。"
- "彼の名前はallenです。彼のnameは <extra_id_0>"→"彼の名前はallenです。彼の名前は英語です。"
- "translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>"→"赤い花. => red flower. 青い花. => blue flower" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか
## Prompting参考
Chain of Thought Prompting Elicits Reasoning in Large Language Models
URL
check in progress
## License
- The MIT license
|
[
"# 日本語誤り訂正\n\n- \"吾輩をは猫である。名前えはまだない。\"→\"吾輩は猫である。名前はまだない。\"\n- \"-small\" has been trained on 20,000 text pairs only.\n- dataset: link *used only first 20,000 text pairs.\n- prefix: \"correction: \" (notice: single task trained.)\n- text-to-textのお気持ち体験版ぐらいの感覚でどうぞ.",
"## 参考\n\n- \"東北大学でMASKが研究をしています。\"→\"東北大学でMASKの研究をしています。\" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?\n- \"東北大学でマスクが研究をしています。\"→\"東北大学でマスクの研究をしています。\"\n- \"東北大学でイーロン・マスクが研究をしています。\"→\"東北大学でイーロン・マスクが研究をしています。\"\n- \"東北大学で「イーロン・マスク」が研究をしています。\"→\"東北大学で「イーロン・マスク」の研究をしています。\" 単語の意味も考慮されている?\n- \"東北大学でイマスクが研究をしています。\"→\"東北大学でイマスクの研究をしています。\"\n- \"東北大学でクが研究をしています。\"→\"東北大学でコンピューターが研究をしています。\" それはちょっと待って。",
"## 参考 extra_idを用い探索 <>は半角に変更してください\n\n- \"東北大学で <extra_id_0> の研究をしています。\"→\"東北大学で化学の研究をしています。\"\n- \"東北大学で <extra_id_0> が研究をしています。\"→\"東北大学で工学が研究をしています。\" 工学さん。\n- \"吾輩は <extra_id_0> である。\"→\"吾輩は吾輩である。\"\n- \"答えは猫です。吾輩は <extra_id_0> である。\"→\"答えは猫です。吾輩は猫である。\"\n- \"答えは猫です。吾輩の <extra_id_0> である。\"→\"答えは猫です。吾輩の心は猫である。\"\n- \"私は猫です。私は <extra_id_0>\"→\"私は猫です。私は猫です。\"\n- \"私は猫です。N/A <extra_id_0>\"→\"猫です。\"\n- \"あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。彼女は猫です。\"\n- \"あなたは女性で猫です。彼は犬です。彼は <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。\"\n- \"あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。彼は男性で猫です。\"\n- \"あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。ライオンは猫です。\"\n- \"あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>\"→\"あなたが女性で猫です。彼は犬です。ライオンが犬です。\"\n- \"Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。\"→\"Aは11、Bは9。Aは11。Bは9。\"\n- \"彼の名前はallenです。彼のnameは <extra_id_0>\"→\"彼の名前はallenです。彼の名前は英語です。\"\n- \"translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>\"→\"赤い花. => red flower. 青い花. => blue flower\" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか",
"## Prompting参考\nChain of Thought Prompting Elicits Reasoning in Large Language Models\nURL\n\ncheck in progress",
"## Licenese\n- The MIT license"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #ja #arxiv-2201.11903 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 日本語誤り訂正\n\n- \"吾輩をは猫である。名前えはまだない。\"→\"吾輩は猫である。名前はまだない。\"\n- \"-small\" has been trained on 20,000 text pairs only.\n- dataset: link *used only first 20,000 text pairs.\n- prefix: \"correction: \" (notice: single task trained.)\n- text-to-textのお気持ち体験版ぐらいの感覚でどうぞ.",
"## 参考\n\n- \"東北大学でMASKが研究をしています。\"→\"東北大学でMASKの研究をしています。\" ジム・キャリーを主語とした唯一のガ格が消され、ジム・キャリーは研究対象となった。易読化のために用いられる主語と動詞を近づける記法は誤り扱い?\n- \"東北大学でマスクが研究をしています。\"→\"東北大学でマスクの研究をしています。\"\n- \"東北大学でイーロン・マスクが研究をしています。\"→\"東北大学でイーロン・マスクが研究をしています。\"\n- \"東北大学で「イーロン・マスク」が研究をしています。\"→\"東北大学で「イーロン・マスク」の研究をしています。\" 単語の意味も考慮されている?\n- \"東北大学でイマスクが研究をしています。\"→\"東北大学でイマスクの研究をしています。\"\n- \"東北大学でクが研究をしています。\"→\"東北大学でコンピューターが研究をしています。\" それはちょっと待って。",
"## 参考 extra_idを用い探索 <>は半角に変更してください\n\n- \"東北大学で <extra_id_0> の研究をしています。\"→\"東北大学で化学の研究をしています。\"\n- \"東北大学で <extra_id_0> が研究をしています。\"→\"東北大学で工学が研究をしています。\" 工学さん。\n- \"吾輩は <extra_id_0> である。\"→\"吾輩は吾輩である。\"\n- \"答えは猫です。吾輩は <extra_id_0> である。\"→\"答えは猫です。吾輩は猫である。\"\n- \"答えは猫です。吾輩の <extra_id_0> である。\"→\"答えは猫です。吾輩の心は猫である。\"\n- \"私は猫です。私は <extra_id_0>\"→\"私は猫です。私は猫です。\"\n- \"私は猫です。N/A <extra_id_0>\"→\"猫です。\"\n- \"あなたは女性で猫です。彼は犬です。彼女は <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。彼女は猫です。\"\n- \"あなたは女性で猫です。彼は犬です。彼は <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。\"\n- \"あなたは女性で猫です。彼は犬です。彼は男性で <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。彼は男性で猫です。\"\n- \"あなたは女性で猫です。彼は犬です。ライオンは <extra_id_0>\"→\"あなたは女性で猫です。彼は犬です。ライオンは猫です。\"\n- \"あなたがは女性で猫です。彼はが犬です。ライオンが <extra_id_0>\"→\"あなたが女性で猫です。彼は犬です。ライオンが犬です。\"\n- \"Aは11、Bは9。Aは <extra_id_0> 。Bは <extra_id_1> 。\"→\"Aは11、Bは9。Aは11。Bは9。\"\n- \"彼の名前はallenです。彼のnameは <extra_id_0>\"→\"彼の名前はallenです。彼の名前は英語です。\"\n- \"translate japanease to english: 赤い花. => red flower. 青い花. => <extra_id_0>\"→\"赤い花. => red flower. 青い花. => blue flower\" タスク比依存翻訳可能性の片鱗.japaneseをjapaneaseと間違えたことは秘密だ・・・と言うか間違えても動くのか",
"## Prompting参考\nChain of Thought Prompting Elicits Reasoning in Large Language Models\nURL\n\ncheck in progress",
"## Licenese\n- The MIT license"
] |
text2text-generation
|
transformers
|
Google's mt5-base fine-tuned in Japanese to summarize patent claims in a limited Pharmaceutical domain.
# 日本語特許請求項要約(医薬特定ドメイン限定)
- """【請求項1】
ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、
a)以下を含む重鎖可変領域:
i)配列番号3を含む第1のCDR;
ii)配列番号4を含む第2のCDR;
iii)配列番号5を含む第3のCDR;及び
b)以下を含む軽鎖可変領域:
i)配列番号6を含む第1のCDR;
ii)配列番号7を含む第2のCDR;
iii)配列番号8を含む第3のCDR;
を含む、抗体。(請求項2~19省略)【請求項20】
前記自己免疫疾患が、関節リウマチ、全身性エリテマトーデス、炎症性腸疾患、潰瘍性大腸炎及び移植片対宿主病からなる群から選択される、請求項19記載の方法。
"""
- →"本発明は、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体に関する。本発明はまた、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体を、それを必要とする患者に投与することを含む、自己免疫疾患の治療方法に関する。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: *
- prefix: "patent claim summarization: " (notice: single task trained.)
- 特定ドメインの2万テキストを用いて要約モデルを作成するとこの程度ですよ,とのお気持ちとして.
- 注意: Hosted inference APIでは要約の一部しか出力されません.使用する際には,Use in Transformersのコードをご自身の環境で実行されることをおすすめします.
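Following the note above that recommends running the model in your own environment, here is a minimal sketch; the prefix comes from this card, while the generation settings (`num_beams`, `max_length`, `no_repeat_ngram_size`) are illustrative choices rather than the author's:

```python
# Sketch: patent claim summarization with the "patent claim summarization: " prefix.
# Generation settings are illustrative; a larger max_length avoids truncated summaries.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kz/mt5base-finetuned-patentsum-japanese-small"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

claims = "【請求項1】 ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、..."  # full claim text, as in the example above
inputs = tokenizer("patent claim summarization: " + claims, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=256, num_beams=4, no_repeat_ngram_size=3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```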
# 参考
- https://huggingface.co/blog/how-to-generate
- 前処理が最適ではなかった。修正する。
- 任意に上位概念・下位概念と変換できるようprefixを追加する。
- 任意のテーマに沿った要約とできるようprefixを追加する。
- prefixを追加せずとも、ある程度任意のテーマに沿った要約とすることは可能。請求項の構造を利用する、任意のテーマに沿っているか判定するモデルを用い生成を補正するなど。
**check in progress**
## License
- The MIT license
|
{"language": "ja", "license": "mit", "tags": ["Summarization", "japanese"], "widget": [{"text": "\u8acb\u6c42\u9805 <extra_id_0>"}]}
|
kz/mt5base-finetuned-patentsum-japanese-small
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"Summarization",
"japanese",
"ja",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #Summarization #japanese #ja #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Google's mt5-base fine-tuned in Japanese to summarize patent claims in a limited Pharmaceutical domain.
# 日本語特許請求項要約(医薬特定ドメイン限定)
- """【請求項1】
ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、
a)以下を含む重鎖可変領域:
i)配列番号3を含む第1のCDR;
ii)配列番号4を含む第2のCDR;
iii)配列番号5を含む第3のCDR;及び
b)以下を含む軽鎖可変領域:
i)配列番号6を含む第1のCDR;
ii)配列番号7を含む第2のCDR;
iii)配列番号8を含む第3のCDR;
を含む、抗体。(請求項2~19省略)【請求項20】
前記自己免疫疾患が、関節リウマチ、全身性エリテマトーデス、炎症性腸疾患、潰瘍性大腸炎及び移植片対宿主病からなる群から選択される、請求項19記載の方法。
"""
- →"本発明は、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体に関する。本発明はまた、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体を、それを必要とする患者に投与することを含む、自己免疫疾患の治療方法に関する。"
- "-small" has been trained on 20,000 text pairs only.
- dataset: *
- prefix: "patent claim summarization: " (notice: single task trained.)
- 特定ドメインの2万テキストを用いて要約モデルを作成するとこの程度ですよ,とのお気持ちとして.
- 注意: Hosted inference APIでは要約の一部しか出力されません.使用する際には,Use in Transformersのコードをご自身の環境で実行されることをおすすめします.
# 参考
- URL
- 前処理が最適ではなかった。修正する。
- 任意に上位概念・下位概念と変換できるようprefixを追加する。
- 任意のテーマに沿った要約とできるようprefixを追加する。
- prefixを追加せずとも、ある程度任意のテーマに沿った要約とすることは可能。請求項の構造を利用する、任意のテーマに沿っているか判定するモデルを用い生成を補正するなど。
check in progress
## License
- The MIT license
|
[
"# 日本語特許請求項要約(医薬特定ドメイン限定)\n\n- \"\"\"【請求項1】\n ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、\na)以下を含む重鎖可変領域:\n i)配列番号3を含む第1のCDR;\n ii)配列番号4を含む第2のCDR;\n iii)配列番号5を含む第3のCDR;及び\nb)以下を含む軽鎖可変領域:\n i)配列番号6を含む第1のCDR;\n ii)配列番号7を含む第2のCDR;\n iii)配列番号8を含む第3のCDR;\nを含む、抗体。(請求項2~19省略)【請求項20】\n 前記自己免疫疾患が、関節リウマチ、全身性エリテマトーデス、炎症性腸疾患、潰瘍性大腸炎及び移植片対宿主病からなる群から選択される、請求項19記載の方法。\n\"\"\"\n- →\"本発明は、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体に関する。本発明はまた、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体を、それを必要とする患者に投与することを含む、自己免疫疾患の治療方法に関する。\"\n\n- \"-small\" has been trained on 20,000 text pairs only. \n- dataset: *\n- prefix: \"patent claim summarization: \" (notice: single task trained.)\n- 特定ドメインの2万テキストを用いて要約モデルを作成するとこの程度ですよ,とのお気持ちとして.\n- 注意: Hosted inference APIでは要約の一部しか出力されません.使用する際には,Use in Transformersのコードをご自身の環境で実行されることをおすすめします.",
"# 参考\n\n- URL\n- 前処理が最適ではなかった。修正する。\n- 任意に上位概念・下位概念と変換できるようprefixを追加する。\n- 任意のテーマに沿った要約とできるようprefixを追加する。\n- prefixを追加せずとも、ある程度任意のテーマに沿った要約とすることは可能。請求項の構造を利用する、任意のテーマに沿っているか判定するモデルを用い生成を補正するなど。\n\ncheck in progress",
"## Licenese\n- The MIT license"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #Summarization #japanese #ja #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 日本語特許請求項要約(医薬特定ドメイン限定)\n\n- \"\"\"【請求項1】\n ヒトCD38(配列番号1)及びカニクイザルCD38(配列番号2)に特異的に結合する単離された抗体であって、\na)以下を含む重鎖可変領域:\n i)配列番号3を含む第1のCDR;\n ii)配列番号4を含む第2のCDR;\n iii)配列番号5を含む第3のCDR;及び\nb)以下を含む軽鎖可変領域:\n i)配列番号6を含む第1のCDR;\n ii)配列番号7を含む第2のCDR;\n iii)配列番号8を含む第3のCDR;\nを含む、抗体。(請求項2~19省略)【請求項20】\n 前記自己免疫疾患が、関節リウマチ、全身性エリテマトーデス、炎症性腸疾患、潰瘍性大腸炎及び移植片対宿主病からなる群から選択される、請求項19記載の方法。\n\"\"\"\n- →\"本発明は、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体に関する。本発明はまた、ヒトCD38タンパク質(配列番号0)及びカニクイザルCD38(配列番号2)に特異的に結合する抗体を、それを必要とする患者に投与することを含む、自己免疫疾患の治療方法に関する。\"\n\n- \"-small\" has been trained on 20,000 text pairs only. \n- dataset: *\n- prefix: \"patent claim summarization: \" (notice: single task trained.)\n- 特定ドメインの2万テキストを用いて要約モデルを作成するとこの程度ですよ,とのお気持ちとして.\n- 注意: Hosted inference APIでは要約の一部しか出力されません.使用する際には,Use in Transformersのコードをご自身の環境で実行されることをおすすめします.",
"# 参考\n\n- URL\n- 前処理が最適ではなかった。修正する。\n- 任意に上位概念・下位概念と変換できるようprefixを追加する。\n- 任意のテーマに沿った要約とできるようprefixを追加する。\n- prefixを追加せずとも、ある程度任意のテーマに沿った要約とすることは可能。請求項の構造を利用する、任意のテーマに沿っているか判定するモデルを用い生成を補正するなど。\n\ncheck in progress",
"## Licenese\n- The MIT license"
] |
text-classification
|
transformers
|
## MarathiSentiment
**An updated and better version of this model covering multiple domains is shared here: <a href="https://huggingface.co/l3cube-pune/marathi-sentiment-md">marathi-sentiment-md</a>** <br>
MarathiSentiment is an IndicBERT (ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)
More details on the dataset, models, and baseline results can be found in our [paper](http://arxiv.org/abs/2103.11408).
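A rough usage sketch with the `transformers` pipeline API; the Marathi sentence below is an illustrative input, not taken from L3CubeMahaSent, and the returned label names depend on the model's config:

```python
# Sketch: Marathi tweet sentiment classification with the fine-tuned IndicBERT model.
from transformers import pipeline

classifier = pipeline("text-classification", model="l3cube-pune/MarathiSentiment")
print(classifier("मला हा चित्रपट खूप आवडला."))  # illustrative input: "I liked this movie a lot."
```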
```
@inproceedings{kulkarni2021l3cubemahasent,
title={L3CubeMahaSent: A Marathi Tweet-based Sentiment Analysis Dataset},
author={Kulkarni, Atharva and Mandhane, Meet and Likhitkar, Manali and Kshirsagar, Gayatri and Joshi, Raviraj},
booktitle={Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis},
pages={213--220},
year={2021}
}
```
|
{"language": "mr", "license": "cc-by-4.0", "tags": ["albert"], "datasets": ["L3CubeMahaSent"], "widget": [{"text": "I like you. </s></s> I love you."}]}
|
l3cube-pune/MarathiSentiment
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"text-classification",
"mr",
"dataset:L3CubeMahaSent",
"arxiv:2103.11408",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2103.11408"
] |
[
"mr"
] |
TAGS
#transformers #pytorch #tf #safetensors #albert #text-classification #mr #dataset-L3CubeMahaSent #arxiv-2103.11408 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## MarathiSentiment
An updated and better version of this model covering multiple domains is shared here: <a href="URL marathi-sentiment-md </a> <br>
MarathiSentiment is an IndicBERT(ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset.
[dataset link] (URL
More details on the dataset, models, and baseline results can be found in our [paper] (URL
|
[
"## MarathiSentiment\n \n An updated and better version of this model covering multiple domains is shared here: <a href=\"URL marathi-sentiment-md </a> <br>\n\nMarathiSentiment is an IndicBERT(ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset.\n[dataset link] (URL\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #albert #text-classification #mr #dataset-L3CubeMahaSent #arxiv-2103.11408 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## MarathiSentiment\n \n An updated and better version of this model covering multiple domains is shared here: <a href=\"URL marathi-sentiment-md </a> <br>\n\nMarathiSentiment is an IndicBERT(ai4bharat/indic-bert) model fine-tuned on L3CubeMahaSent - a Marathi tweet-based sentiment analysis dataset.\n[dataset link] (URL\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL"
] |
text-classification
|
transformers
|
## hate-bert-hasoc-marathi
hate-bert-hasoc-marathi is a binary hate speech model fine-tuned on Marathi Hasoc Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2110.12200).
A new version of Marathi Hate Speech Detection models can be found here: <br>
binary: https://huggingface.co/l3cube-pune/mahahate-bert <br>
multi label: https://huggingface.co/l3cube-pune/mahahate-multi-roberta <br>
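A rough usage sketch; the raw `LABEL_0`/`LABEL_1` names are an assumption about the model config, and the mapping below simply mirrors the 0 -> None, 1 -> Hate convention stated above:

```python
# Sketch: binary hate-speech detection, mapping raw labels to the documented classes.
from transformers import pipeline

classifier = pipeline("text-classification", model="l3cube-pune/hate-bert-hasoc-marathi")
label_map = {"LABEL_0": "None", "LABEL_1": "Hate"}  # assumes default LABEL_i naming
result = classifier("तुम्ही खूप चांगले काम केले आहे.")[0]  # illustrative benign Marathi input
print(label_map.get(result["label"], result["label"]), round(result["score"], 3))
```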
```
@article{velankar2021hate,
title={Hate and Offensive Speech Detection in Hindi and Marathi},
author={Velankar, Abhishek and Patil, Hrushikesh and Gore, Amol and Salunke, Shubham and Joshi, Raviraj},
journal={arXiv preprint arXiv:2110.12200},
year={2021}
}
```
|
{"language": "mr", "license": "cc-by-4.0", "tags": ["albert"], "datasets": ["HASOC 2021"], "widget": [{"text": "I like you. </s></s> I love you."}]}
|
l3cube-pune/hate-bert-hasoc-marathi
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"text-classification",
"mr",
"arxiv:2110.12200",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.12200"
] |
[
"mr"
] |
TAGS
#transformers #pytorch #tf #safetensors #albert #text-classification #mr #arxiv-2110.12200 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
|
## hate-bert-hasoc-marathi
hate-bert-hasoc-marathi is a binary hate speech model fine-tuned on Marathi Hasoc Hate Speech Dataset 2021.
The label mappings are 0 -> None, 1 -> Hate.
More details on the dataset, models, and baseline results can be found in our [paper] (URL
A new version of Marathi Hate Speech Detection models can be found here: <br>
binary: URL <br>
multi label: URL <br>
|
[
"## hate-bert-hasoc-marathi\n\nhate-bert-hasoc-marathi is a binary hate speech model fine-tuned on Marathi Hasoc Hate Speech Dataset 2021.\nThe label mappings are 0 -> None, 1 -> Hate.\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL\n\nA new version of Marathi Hate Speech Detection models can be found here: <br>\nbinary: URL <br>\nmulti label: URL <br>"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #albert #text-classification #mr #arxiv-2110.12200 #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## hate-bert-hasoc-marathi\n\nhate-bert-hasoc-marathi is a binary hate speech model fine-tuned on Marathi Hasoc Hate Speech Dataset 2021.\nThe label mappings are 0 -> None, 1 -> Hate.\n\nMore details on the dataset, models, and baseline results can be found in our [paper] (URL\n\nA new version of Marathi Hate Speech Detection models can be found here: <br>\nbinary: URL <br>\nmulti label: URL <br>"
] |