| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| jeapaul/wav2vec2-base-torgo-demo-m04-nolm | jeapaul | 2022-11-23T00:14:40Z | 106 | 0 | transformers | ["transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-16T20:01:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-torgo-demo-m04-nolm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-torgo-demo-m04-nolm
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5735
- Wer: 1.0
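For reference, a minimal, hedged sketch (not part of the original card) of loading this checkpoint with the `transformers` ASR pipeline; note that the reported WER of 1.0 suggests the checkpoint does not yet produce useful transcripts, and the audio path is a hypothetical example.
```python
# Hedged sketch: transcribe a local audio file with the fine-tuned checkpoint.
# "sample.wav" is a hypothetical 16 kHz mono WAV file, not shipped with the model.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="jeapaul/wav2vec2-base-torgo-demo-m04-nolm")
print(asr("sample.wav")["text"])
```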
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 3.431 | 0.88 | 500 | 4.5567 | 1.0 |
| 3.4727 | 1.75 | 1000 | 3.5626 | 1.0 |
| 3.3879 | 2.63 | 1500 | 3.9274 | 1.0 |
| 3.3513 | 3.5 | 2000 | 3.4813 | 1.0 |
| 3.3538 | 4.38 | 2500 | 3.7300 | 1.0 |
| 3.3539 | 5.25 | 3000 | 3.5714 | 1.0 |
| 3.339 | 6.13 | 3500 | 3.6732 | 1.0 |
| 3.3038 | 7.01 | 4000 | 3.6788 | 1.0 |
| 3.35 | 7.88 | 4500 | 3.6715 | 1.0 |
| 3.338 | 8.76 | 5000 | 3.5161 | 1.0 |
| 3.3306 | 9.63 | 5500 | 3.7386 | 1.0 |
| 3.3266 | 10.51 | 6000 | 3.4908 | 1.0 |
| 3.3184 | 11.38 | 6500 | 3.7669 | 1.0 |
| 3.3189 | 12.26 | 7000 | 3.6142 | 1.0 |
| 3.331 | 13.13 | 7500 | 3.5619 | 1.0 |
| 3.3139 | 14.01 | 8000 | 3.6632 | 1.0 |
| 3.3069 | 14.89 | 8500 | 3.6127 | 1.0 |
| 3.315 | 15.76 | 9000 | 3.5562 | 1.0 |
| 3.3079 | 16.64 | 9500 | 3.7094 | 1.0 |
| 3.3077 | 17.51 | 10000 | 3.5412 | 1.0 |
| 3.3188 | 18.39 | 10500 | 3.6303 | 1.0 |
| 3.3133 | 19.26 | 11000 | 3.5704 | 1.0 |
| 3.3428 | 20.14 | 11500 | 3.5662 | 1.0 |
| 3.3082 | 21.02 | 12000 | 3.6084 | 1.0 |
| 3.3238 | 21.89 | 12500 | 3.6164 | 1.0 |
| 3.3119 | 22.77 | 13000 | 3.5787 | 1.0 |
| 3.2981 | 23.64 | 13500 | 3.6356 | 1.0 |
| 3.3153 | 24.52 | 14000 | 3.5726 | 1.0 |
| 3.3065 | 25.39 | 14500 | 3.5908 | 1.0 |
| 3.3199 | 26.27 | 15000 | 3.5823 | 1.0 |
| 3.306 | 27.15 | 15500 | 3.5658 | 1.0 |
| 3.3153 | 28.02 | 16000 | 3.5818 | 1.0 |
| 3.2762 | 28.9 | 16500 | 3.5810 | 1.0 |
| 3.3196 | 29.77 | 17000 | 3.5735 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.0.0
- Tokenizers 0.13.2
| jacobthebanana/galactica-30b | jacobthebanana | 2022-11-22T23:16:04Z | 7 | 1 | transformers | ["transformers", "jax", "opt", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-11-18T15:10:33Z |
---
license: cc-by-nc-4.0
---
JAX weights converted from the PyTorch checkpoint at `facebook/galactica-30b`.
```python
(env) ubuntu@vm:~$ JAX_PLATFORM_NAME=cpu python3
>>> import jax
>>> print(jax.devices())
[CpuDevice(id=0)] # Ensure that model weights are loaded into CPU RAM, not accelerator memory.
>>> from transformers import FlaxOPTForCausalLM
>>> model = FlaxOPTForCausalLM.from_pretrained("facebook/galactica-30b", from_pt=True)
>>> model.push_to_hub(hf_model_repo)  # hf_model_repo: placeholder for the destination Hub repo id
```
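Once converted, the JAX weights can in principle be loaded directly from this repo. A hedged sketch follows, assuming the tokenizer is fetched from the original `facebook/galactica-30b` repo and that enough host RAM is available for a 30B-parameter model:
```python
# Hedged sketch: load the converted JAX weights from this repo and generate.
# Assumptions: ample CPU RAM; tokenizer is taken from facebook/galactica-30b.
from transformers import AutoTokenizer, FlaxOPTForCausalLM

tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-30b")
model = FlaxOPTForCausalLM.from_pretrained("jacobthebanana/galactica-30b")

inputs = tokenizer("The Transformer architecture", return_tensors="np")
outputs = model.generate(inputs["input_ids"], max_length=32)
print(tokenizer.decode(outputs.sequences[0], skip_special_tokens=True))
```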
## Citation and Attribution
The citation from the original repo is reproduced below, as required by the CC BY-NC 4.0 license.
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
```
> Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
| unza/xls-r-300m-nyanja-fullset | unza | 2022-11-22T23:02:48Z | 163 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "NyanjaSpeech", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-22T10:28:07Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- NyanjaSpeech
- generated_from_trainer
metrics:
- wer
model-index:
- name: xls-r-300m-nyanja-fullset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-nyanja-fullset
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NYANJASPEECH - NYA dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1987
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.3815 | 1.58 | 500 | 3.1987 | 1.0 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
| warrormac/autotrain-my-train-2209070896 | warrormac | 2022-11-22T22:30:08Z | 79 | 0 | transformers | ["transformers", "pytorch", "autotrain", "translation", "en", "es", "dataset:warrormac/autotrain-data-my-train", "co2_eq_emissions", "endpoints_compatible", "region:us"] | translation | 2022-11-22T21:56:47Z |
---
tags:
- autotrain
- translation
language:
- en
- es
datasets:
- warrormac/autotrain-data-my-train
co2_eq_emissions:
emissions: 48.01845367300684
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 2209070896
- CO2 Emissions (in grams): 48.0185
## Validation Metrics
- Loss: 0.940
- SacreBLEU: 37.030
- Gen len: 11.428
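For reference, a hedged usage sketch (not part of the original card) running English-to-Spanish translation through the `transformers` pipeline, assuming the checkpoint is a standard seq2seq translation model as the AutoTrain tags suggest:
```python
# Hedged sketch: en -> es translation with the AutoTrain checkpoint.
from transformers import pipeline

translator = pipeline("translation", model="warrormac/autotrain-my-train-2209070896")
print(translator("The weather is nice today.")[0]["translation_text"])
```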
| monakth/distilbert-base-multilingual-cased-sv2 | monakth | 2022-11-22T22:26:39Z | 105 | 0 | transformers | ["transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-11-22T22:24:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-multilingual-cased-sv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-sv2
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the squad_v2 dataset.
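As a hedged illustration (not part of the original card), the checkpoint can be used for extractive question answering via the `transformers` pipeline; the question and context are illustrative:
```python
# Hedged sketch: extractive QA with the fine-tuned multilingual DistilBERT.
from transformers import pipeline

qa = pipeline("question-answering", model="monakth/distilbert-base-multilingual-cased-sv2")
result = qa(question="Where is the Eiffel Tower located?",
            context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.")
print(result["answer"], result["score"])
```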
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| sacculifer/dimbat_disaster_type_distilbert | sacculifer | 2022-11-22T22:07:32Z | 61 | 0 | transformers | ["transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-08-05T19:36:01Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: tmpzujlpono
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Tweets disaster type classification model
This model was trained on part of the Disaster Tweet Corpus 2020 dataset (Analysis of Filtering Models for Disaster-Related Tweets, Wiegmann, M. et al., 2020).
It achieves the following results on the evaluation set:
- Train Loss: 0.0875
- Train Accuracy: 0.8783
- Validation Loss: 0.2980
- Validation Accuracy: 0.8133
- Epoch: 5
## Model description
Labels:
- disease: 1
- earthquake: 2
- flood: 3
- hurricane & tornado: 4
- wildfire: 5
- industrial accident: 6
- societal crime: 7
- transportation accident: 8
- meteor crash: 9
- haze: 0
## Intended uses & limitations
This model can detect 10 different types of disaster (natural and human-made), but it has trouble detecting type 0 (haze) because such tweets are scarce in the training dataset and resemble type 5 (wildfire).
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: `optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps)`
- batch_size: 16
- num_epochs: 5
- batches_per_epoch: `len(tokenized_tweet["train"]) // batch_size`
- total_train_steps: `int(batches_per_epoch * num_epochs)`
- training_precision: float32
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.9.2
- Datasets 2.4.0
- Tokenizers 0.12.1
### How to use it
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sacculifer/dimbat_disaster_type_distilbert")
model = TFAutoModelForSequenceClassification.from_pretrained("sacculifer/dimbat_disaster_type_distilbert")
```
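Continuing from the loading snippet above, a hedged inference sketch (not from the original card; the example tweet is illustrative):
```python
# Hedged sketch: classify one tweet with the tokenizer/model loaded above.
import tensorflow as tf

inputs = tokenizer("A magnitude 6.1 earthquake struck off the coast this morning", return_tensors="tf")
logits = model(**inputs).logits
predicted = int(tf.argmax(logits, axis=-1)[0])  # per the label list above, 2 corresponds to earthquake
print(predicted)
```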
| manirai91/xlm-roberta-imdb | manirai91 | 2022-11-22T20:36:34Z | 126 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-22T16:42:44Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: xlm-roberta-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-imdb
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset.
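A hedged usage sketch (not part of the original card) for running sentiment inference on an IMDB-style review:
```python
# Hedged sketch: sentiment classification with the fine-tuned XLM-R checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="manirai91/xlm-roberta-imdb")
print(classifier("A beautifully shot film, but the script never quite lands."))
```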
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0
- Datasets 2.7.0
- Tokenizers 0.13.2
| research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2 | research-backup | 2022-11-22T20:25:41Z | 103 | 0 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-22T07:40:00Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.790515873015873
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37967914438502676
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3857566765578635
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5063924402445803
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.646
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4517543859649123
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42824074074074076
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8080458038270304
- name: F1 (macro)
type: f1_macro
value: 0.7357565896819839
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7894366197183098
- name: F1 (macro)
type: f1_macro
value: 0.4680529848631216
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5520043336944745
- name: F1 (macro)
type: f1_macro
value: 0.5647005456999193
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9177157960631565
- name: F1 (macro)
type: f1_macro
value: 0.7991809595622609
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.770918207458477
- name: F1 (macro)
type: f1_macro
value: 0.701131895018139
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.37967914438502676
- Accuracy on SAT: 0.3857566765578635
- Accuracy on BATS: 0.5063924402445803
- Accuracy on U2: 0.4517543859649123
- Accuracy on U4: 0.42824074074074076
- Accuracy on Google: 0.646
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8080458038270304
- Micro F1 score on CogALexV: 0.7894366197183098
- Micro F1 score on EVALution: 0.5520043336944745
- Micro F1 score on K&H+N: 0.9177157960631565
- Micro F1 score on ROOT09: 0.770918207458477
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.790515873015873
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
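Beyond the single-pair call above, a hedged sketch comparing two pairs by cosine similarity; that `get_embedding` also accepts a list of pairs is an assumption based on the library's documentation:
```python
# Hedged sketch: embed two word pairs and compare them.
# Assumption: get_embedding also accepts a list of [head, tail] pairs.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2")
vectors = np.array(model.get_embedding([["Tokyo", "Japan"], ["Paris", "France"]]))
a, b = vectors[0], vectors[1]
print(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))  # cosine similarity
```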
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2 | research-backup | 2022-11-22T19:57:35Z | 103 | 0 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-22T07:36:40Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8335714285714285
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.38235294117647056
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3798219584569733
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5336297943301834
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.662
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4473684210526316
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4166666666666667
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8625885189091457
- name: F1 (macro)
type: f1_macro
value: 0.8603027072164148
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8065727699530516
- name: F1 (macro)
type: f1_macro
value: 0.5506373401584694
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6175514626218852
- name: F1 (macro)
type: f1_macro
value: 0.6052063445391235
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9263406830354037
- name: F1 (macro)
type: f1_macro
value: 0.8061025838390545
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8373550611093701
- name: F1 (macro)
type: f1_macro
value: 0.837629132435287
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.38235294117647056
- Accuracy on SAT: 0.3798219584569733
- Accuracy on BATS: 0.5336297943301834
- Accuracy on U2: 0.4473684210526316
- Accuracy on U4: 0.4166666666666667
- Accuracy on Google: 0.662
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8625885189091457
- Micro F1 score on CogALexV: 0.8065727699530516
- Micro F1 score on EVALution: 0.6175514626218852
- Micro F1 score on K&H+N: 0.9263406830354037
- Micro F1 score on ROOT09: 0.8373550611093701
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8335714285714285
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2 | research-backup | 2022-11-22T19:43:03Z | 103 | 0 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-22T07:34:42Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7143253968253969
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.30213903743315507
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.29673590504451036
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.41078376876042244
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.444
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3508771929824561
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35185185185185186
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8389332529757421
- name: F1 (macro)
type: f1_macro
value: 0.8320870274406121
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8110328638497653
- name: F1 (macro)
type: f1_macro
value: 0.558175722976752
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6397616468039004
- name: F1 (macro)
type: f1_macro
value: 0.6018197960350038
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.936495791889824
- name: F1 (macro)
type: f1_macro
value: 0.8329891004271437
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8574114697586963
- name: F1 (macro)
type: f1_macro
value: 0.859031346414651
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.30213903743315507
- Accuracy on SAT: 0.29673590504451036
- Accuracy on BATS: 0.41078376876042244
- Accuracy on U2: 0.3508771929824561
- Accuracy on U4: 0.35185185185185186
- Accuracy on Google: 0.444
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8389332529757421
- Micro F1 score on CogALexV: 0.8110328638497653
- Micro F1 score on EVALution: 0.6397616468039004
- Micro F1 score on K&H+N: 0.936495791889824
- Micro F1 score on ROOT09: 0.8574114697586963
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7143253968253969
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| alryan1478/gpt2-wikitext2 | alryan1478 | 2022-11-22T19:15:47Z | 175 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-11-22T16:54:38Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.561 | 1.0 | 2249 | 6.4685 |
| 6.1921 | 2.0 | 4498 | 6.1978 |
| 6.017 | 3.0 | 6747 | 6.1085 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.7.0
- Tokenizers 0.13.2
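A hedged usage sketch (not part of the original card) sampling text from the fine-tuned checkpoint; the prompt is illustrative:
```python
# Hedged sketch: generate a continuation with the fine-tuned GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="alryan1478/gpt2-wikitext2")
print(generator("The history of the region", max_new_tokens=40, do_sample=True)[0]["generated_text"])
```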
| masapasa/meddner | masapasa | 2022-11-22T19:13:06Z | 3 | 0 | spacy | ["spacy", "token-classification", "en", "license:mit", "model-index", "region:us"] | token-classification | 2022-11-22T19:05:40Z |
---
tags:
- spacy
- token-classification
language:
- en
license: mit
model-index:
- name: en_core_med7_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8649613325
- name: NER Recall
type: recall
value: 0.8892966361
- name: NER F Score
type: f_score
value: 0.876960193
duplicated_from: kormilitzin/en_core_med7_lg
---
| Feature | Description |
| --- | --- |
| **Name** | `en_core_med7_lg` |
| **Version** | `3.4.2.1` |
| **spaCy** | `>=3.4.2,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | `MIT` |
| **Author** | [Andrey Kormilitzin](https://www.kormilitzin.com/) |
### Label Scheme
<details>
<summary>View label scheme (7 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `DOSAGE`, `DRUG`, `DURATION`, `FORM`, `FREQUENCY`, `ROUTE`, `STRENGTH` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 87.70 |
| `ENTS_P` | 86.50 |
| `ENTS_R` | 88.93 |
| `TOK2VEC_LOSS` | 226109.53 |
| `NER_LOSS` | 302222.55 |
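A hedged usage sketch (not part of the original card), assuming the `en_core_med7_lg` package has already been installed from this repo; the clinical sentence is illustrative:
```python
# Hedged sketch: run the med7 clinical NER pipeline.
# Assumes the en_core_med7_lg package is installed (e.g. from this repo's wheel).
import spacy

nlp = spacy.load("en_core_med7_lg")
doc = nlp("The patient was prescribed 40 mg of paracetamol, one tablet twice daily for five days.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # labels: DOSAGE, DRUG, DURATION, FORM, FREQUENCY, ROUTE, STRENGTH
```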
### BibTeX entry and citation info
```bibtex
@article{kormilitzin2021med7,
title={Med7: A transferable clinical natural language processing model for electronic health records},
author={Kormilitzin, Andrey and Vaci, Nemanja and Liu, Qiang and Nevado-Holgado, Alejo},
journal={Artificial Intelligence in Medicine},
volume={118},
pages={102086},
year={2021},
publisher={Elsevier}
}
```
| research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2 | research-backup | 2022-11-22T19:10:57Z | 103 | 0 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-22T07:30:50Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.47160714285714284
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34759358288770054
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3590504451038576
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4980544747081712
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.544
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.38596491228070173
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.38657407407407407
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.835919843302697
- name: F1 (macro)
type: f1_macro
value: 0.8291105198617971
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7704225352112676
- name: F1 (macro)
type: f1_macro
value: 0.4170022869326865
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6121343445287107
- name: F1 (macro)
type: f1_macro
value: 0.5765221107709003
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9145162412186131
- name: F1 (macro)
type: f1_macro
value: 0.783440515726974
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8223127546223754
- name: F1 (macro)
type: f1_macro
value: 0.8219042972063227
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.34759358288770054
- Accuracy on SAT: 0.3590504451038576
- Accuracy on BATS: 0.4980544747081712
- Accuracy on U2: 0.38596491228070173
- Accuracy on U4: 0.38657407407407407
- Accuracy on Google: 0.544
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.835919843302697
- Micro F1 score on CogALexV: 0.7704225352112676
- Micro F1 score on EVALution: 0.6121343445287107
- Micro F1 score on K&H+N: 0.9145162412186131
- Micro F1 score on ROOT09: 0.8223127546223754
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.47160714285714284
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 4
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| HarshitaDiddee/AmericasNLP_Kotiria | HarshitaDiddee | 2022-11-22T18:58:36Z | 4 | 0 | transformers | ["transformers", "wav2vec2", "automatic-speech-recognition", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-22T18:56:28Z |
---
license: cc-by-4.0
---
ASR for Kotiria (Data Source: AmericasNLP Shared Task for Low-Resource ASR).
| research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2 | research-backup | 2022-11-22T18:49:15Z | 104 | 0 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-22T07:28:58Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6089087301587301
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.43315508021390375
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44510385756676557
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6120066703724292
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.878
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4473684210526316
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.49537037037037035
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8796142835618503
- name: F1 (macro)
type: f1_macro
value: 0.8747731277585521
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8394366197183099
- name: F1 (macro)
type: f1_macro
value: 0.6300385764057015
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6749729144095341
- name: F1 (macro)
type: f1_macro
value: 0.6626586846228053
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9191069068651319
- name: F1 (macro)
type: f1_macro
value: 0.8114897599095089
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8921968035098715
- name: F1 (macro)
type: f1_macro
value: 0.8854495217016495
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.43315508021390375
- Accuracy on SAT: 0.44510385756676557
- Accuracy on BATS: 0.6120066703724292
- Accuracy on U2: 0.4473684210526316
- Accuracy on U4: 0.49537037037037035
- Accuracy on Google: 0.878
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8796142835618503
- Micro F1 score on CogALexV: 0.8394366197183099
- Micro F1 score on EVALution: 0.6749729144095341
- Micro F1 score on K&H+N: 0.9191069068651319
- Micro F1 score on ROOT09: 0.8921968035098715
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6089087301587301
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
| HarshitaDiddee/AmericasNLP_Bribri | HarshitaDiddee | 2022-11-22T18:35:11Z | 91 | 0 | transformers | ["transformers", "wav2vec2", "automatic-speech-recognition", "license:cc-by-4.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-11-22T18:24:40Z |
---
license: cc-by-4.0
---
ASR Model for Bribri (Source: AmericasNLP Shared Task 2022).
| umairalipathan/finetuning-sentiment-model-surrender-final | umairalipathan | 2022-11-22T18:17:49Z | 107 | 0 | transformers | ["transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-11-22T18:08:12Z |
---
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-surrender-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-surrender-final
This model is a fine-tuned version of [umairalipathan/autotrain-sisu_surrender-2206370778](https://huggingface.co/umairalipathan/autotrain-sisu_surrender-2206370778) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2072
- eval_accuracy: 0.9556
- eval_f1: 0.9714
- eval_runtime: 8.4
- eval_samples_per_second: 5.357
- eval_steps_per_second: 0.357
- step: 0
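A hedged inference sketch (not part of the original card; the example sentence is illustrative and the label mapping is not documented in the card):
```python
# Hedged sketch: recover class probabilities from the fine-tuned BERT classifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "umairalipathan/finetuning-sentiment-model-surrender-final"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("We kept going even when everything pointed to giving up.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)
```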
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
| motmono/Modified-Reinforce-PixelCopter | motmono | 2022-11-22T18:16:23Z | 0 | 0 | null | ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2022-11-22T18:13:52Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Modified-Reinforce-PixelCopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 16.10 +/- 10.73
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
| research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2 | research-backup | 2022-11-22T18:16:11Z | 103 | 0 | transformers | ["transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | feature-extraction | 2022-11-22T07:26:58Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6346626984126984
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.32887700534759357
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3264094955489614
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.47581989994441354
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.464
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37719298245614036
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.36342592592592593
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7761036612927528
- name: F1 (macro)
type: f1_macro
value: 0.7415561766602355
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7328638497652582
- name: F1 (macro)
type: f1_macro
value: 0.47573763054929613
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5390032502708559
- name: F1 (macro)
type: f1_macro
value: 0.49194003623703636
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8753564721430062
- name: F1 (macro)
type: f1_macro
value: 0.7536524804914483
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8282670009401442
- name: F1 (macro)
type: f1_macro
value: 0.8236645741563291
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.32887700534759357
- Accuracy on SAT: 0.3264094955489614
- Accuracy on BATS: 0.47581989994441354
- Accuracy on U2: 0.37719298245614036
- Accuracy on U4: 0.36342592592592593
- Accuracy on Google: 0.464
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.7761036612927528
- Micro F1 score on CogALexV: 0.7328638497652582
- Micro F1 score on EVALution: 0.5390032502708559
- Micro F1 score on K&H+N: 0.8753564721430062
- Micro F1 score on ROOT09: 0.8282670009401442
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6346626984126984
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for the roberta-base encoder
```
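The relation embeddings can be compared across word pairs, e.g. with cosine similarity. A minimal sketch (the `scipy` dependency and the second word pair are illustrative, not part of the original card):
```python
from scipy.spatial.distance import cosine
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2")
# embed two word pairs that share the capital-of relation
v_a = model.get_embedding(['Tokyo', 'Japan'])
v_b = model.get_embedding(['Paris', 'France'])
print(1 - cosine(v_a, v_b))  # cosine similarity; higher means more similar relations
```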
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
julesyego/train
|
julesyego
| 2022-11-22T18:04:41Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T12:58:17Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1100
- F1: 0.6074
- Roc Auc: 0.7538
- Accuracy: 0.8966
## Model description
More information needed
## Intended uses & limitations
More information needed
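While no usage details are given, the F1/ROC AUC metrics above suggest a multi-label setup; the following is a hypothetical inference sketch under that assumption (the example text and the 0.5 threshold are illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("julesyego/train")
model = AutoModelForSequenceClassification.from_pretrained("julesyego/train")

inputs = tokenizer("example text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # per-label probabilities under the multi-label assumption
labels = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]  # assumed threshold
print(labels)
```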
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.2023 | 1.0 | 5261 | 0.1159 | 0.5889 | 0.7525 | 0.8851 |
| 0.1663 | 2.0 | 10522 | 0.1100 | 0.6074 | 0.7538 | 0.8966 |
| 0.1472 | 3.0 | 15783 | 0.1132 | 0.5736 | 0.7679 | 0.8634 |
| 0.1312 | 4.0 | 21044 | 0.1159 | 0.5975 | 0.7462 | 0.8911 |
| 0.1175 | 5.0 | 26305 | 0.1289 | 0.5922 | 0.7390 | 0.8936 |
| 0.1036 | 6.0 | 31566 | 0.1380 | 0.6062 | 0.7463 | 0.897 |
| 0.089 | 7.0 | 36827 | 0.1440 | 0.5927 | 0.7395 | 0.894 |
| 0.077 | 8.0 | 42088 | 0.1579 | 0.5998 | 0.7463 | 0.8944 |
| 0.0661 | 9.0 | 47349 | 0.1662 | 0.5933 | 0.7382 | 0.8956 |
| 0.0584 | 10.0 | 52610 | 0.1665 | 0.5940 | 0.7424 | 0.8922 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
renjithman/finetuning-sentiment-model-3000-samples
|
renjithman
| 2022-11-22T17:43:52Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T17:30:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8704318936877077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3099
- Accuracy: 0.87
- F1: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
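Pending more details, a minimal hypothetical inference sketch with the `pipeline` API (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="renjithman/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was surprisingly good."))
```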
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
gd1m3y/test_trainer_1
|
gd1m3y
| 2022-11-22T17:38:49Z | 178 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T17:04:11Z |
---
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
model-index:
- name: test_trainer_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer_1
This model is a fine-tuned version of [SALT-NLP/FLANG-Roberta](https://huggingface.co/SALT-NLP/FLANG-Roberta) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5963
- eval_accuracy: 0.9242
- eval_runtime: 4.3354
- eval_samples_per_second: 97.337
- eval_steps_per_second: 12.225
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
This is a demo model for our reference.
|
datasciencemmw/old-beta2
|
datasciencemmw
| 2022-11-22T17:37:01Z | 101 | 1 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:LiveEvil/autotrain-data-copuml-la-beta-demo",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T17:35:39Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- LiveEvil/autotrain-data-copuml-la-beta-demo
co2_eq_emissions:
emissions: 1.2815143214785873
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2205770755
- CO2 Emissions (in grams): 1.2815
## Validation Metrics
- Loss: 1.085
- Accuracy: 0.747
- Macro F1: 0.513
- Micro F1: 0.747
- Weighted F1: 0.715
- Macro Precision: 0.533
- Micro Precision: 0.747
- Weighted Precision: 0.691
- Macro Recall: 0.515
- Micro Recall: 0.747
- Weighted Recall: 0.747
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/LiveEvil/autotrain-copuml-la-beta-demo-2205770755
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
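To go from raw logits to a predicted label, a softmax can be applied; a self-contained sketch (the mapping relies on the `id2label` entries stored in the model config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]  # class probabilities
print(model.config.id2label[int(probs.argmax())], float(probs.max()))
```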
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
|
research-backup
| 2022-11-22T17:34:18Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:40:04Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8018650793650793
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3502673796791444
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35014836795252224
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5202890494719289
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.644
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39035087719298245
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.43287037037037035
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8461654361910502
- name: F1 (macro)
type: f1_macro
value: 0.8411664963735426
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8145539906103286
- name: F1 (macro)
type: f1_macro
value: 0.5873414064116238
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6505958829902492
- name: F1 (macro)
type: f1_macro
value: 0.6269958308732405
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9319051262433052
- name: F1 (macro)
type: f1_macro
value: 0.8393686548194149
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7511751801942964
- name: F1 (macro)
type: f1_macro
value: 0.6464435364634403
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3502673796791444
- Accuracy on SAT: 0.35014836795252224
- Accuracy on BATS: 0.5202890494719289
- Accuracy on U2: 0.39035087719298245
- Accuracy on U4: 0.43287037037037035
- Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8461654361910502
- Micro F1 score on CogALexV: 0.8145539906103286
- Micro F1 score on EVALution: 0.6505958829902492
- Micro F1 score on K&H+N: 0.9319051262433052
- Micro F1 score on ROOT09: 0.7511751801942964
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8018650793650793
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for the roberta-base encoder
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
|
research-backup
| 2022-11-22T17:33:29Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:22:15Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7463293650793651
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34759358288770054
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3590504451038576
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.481378543635353
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.494
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3991228070175439
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35648148148148145
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8610818140726232
- name: F1 (macro)
type: f1_macro
value: 0.8525458448699613
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8171361502347417
- name: F1 (macro)
type: f1_macro
value: 0.5610856949320919
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6229685807150596
- name: F1 (macro)
type: f1_macro
value: 0.6126645128177534
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9215413507685887
- name: F1 (macro)
type: f1_macro
value: 0.8042276096823726
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.857724851143842
- name: F1 (macro)
type: f1_macro
value: 0.8472661094927697
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.34759358288770054
- Accuracy on SAT: 0.3590504451038576
- Accuracy on BATS: 0.481378543635353
- Accuracy on U2: 0.3991228070175439
- Accuracy on U4: 0.35648148148148145
- Accuracy on Google: 0.494
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8610818140726232
- Micro F1 score on CogALexV: 0.8171361502347417
- Micro F1 score on EVALution: 0.6229685807150596
- Micro F1 score on K&H+N: 0.9215413507685887
- Micro F1 score on ROOT09: 0.857724851143842
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7463293650793651
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for the roberta-base encoder
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
alanoix/whisper-small-br
|
alanoix
| 2022-11-22T17:26:31Z | 80 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"br",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-22T09:51:24Z |
---
language:
- br
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-br
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: br, split: test'
metrics:
- name: Wer
type: wer
value: 49.98168162667155
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-br
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8542
- Wer: 49.9817
## Model description
More information needed
## Intended uses & limitations
More information needed
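Pending more details, a minimal hypothetical transcription sketch with the ASR `pipeline` (the audio file name is a placeholder for a Breton recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="alanoix/whisper-small-br")
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder input
```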
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1415 | 3.36 | 1000 | 0.7406 | 54.0117 |
| 0.0147 | 6.71 | 2000 | 0.7909 | 51.5479 |
| 0.0011 | 10.07 | 3000 | 0.8368 | 49.7710 |
| 0.0007 | 13.42 | 4000 | 0.8542 | 49.9817 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
datasciencemmw/old-beta1
|
datasciencemmw
| 2022-11-22T17:15:53Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:LiveEvil/autotrain-data-copuml-production",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T17:14:48Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- LiveEvil/autotrain-data-copuml-production
co2_eq_emissions:
emissions: 0.9758714074673083
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2205570752
- CO2 Emissions (in grams): 0.9759
## Validation Metrics
- Loss: 1.092
- Accuracy: 0.701
- Macro F1: 0.416
- Micro F1: 0.701
- Weighted F1: 0.670
- Macro Precision: 0.399
- Micro Precision: 0.701
- Weighted Precision: 0.643
- Macro Recall: 0.436
- Micro Recall: 0.701
- Weighted Recall: 0.701
## Usage
You can use cURL to access this model:
```shell
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/LiveEvil/autotrain-copuml-production-2205570752
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("LiveEvil/autotrain-copuml-production-2205570752", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("LiveEvil/autotrain-copuml-production-2205570752", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
|
research-backup
| 2022-11-22T17:13:57Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:30:48Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7387698412698412
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3342245989304813
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34718100890207715
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5441912173429683
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.644
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35526315789473684
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37962962962962965
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8145246346240772
- name: F1 (macro)
type: f1_macro
value: 0.801802054210856
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7774647887323943
- name: F1 (macro)
type: f1_macro
value: 0.5026184700694826
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5980498374864572
- name: F1 (macro)
type: f1_macro
value: 0.5765100456864519
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8878069138206858
- name: F1 (macro)
type: f1_macro
value: 0.7711282513838499
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.827326856784707
- name: F1 (macro)
type: f1_macro
value: 0.824410778730745
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3342245989304813
- Accuracy on SAT: 0.34718100890207715
- Accuracy on BATS: 0.5441912173429683
- Accuracy on U2: 0.35526315789473684
- Accuracy on U4: 0.37962962962962965
- Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8145246346240772
- Micro F1 score on CogALexV: 0.7774647887323943
- Micro F1 score on EVALution: 0.5980498374864572
- Micro F1 score on K&H+N: 0.8878069138206858
- Micro F1 score on ROOT09: 0.827326856784707
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7387698412698412
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for the roberta-base encoder
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1
|
research-backup
| 2022-11-22T17:10:39Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:28:59Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7853174603174603
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4197860962566845
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42433234421364985
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5619788771539744
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.744
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.43859649122807015
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4351851851851852
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8895585354828989
- name: F1 (macro)
type: f1_macro
value: 0.8809341644131754
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8453051643192488
- name: F1 (macro)
type: f1_macro
value: 0.624040279392662
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6793066088840737
- name: F1 (macro)
type: f1_macro
value: 0.6602046108703392
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9344786812269598
- name: F1 (macro)
type: f1_macro
value: 0.8375382298577612
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8658727671576308
- name: F1 (macro)
type: f1_macro
value: 0.8645267089284405
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4197860962566845
- Accuracy on SAT: 0.42433234421364985
- Accuracy on BATS: 0.5619788771539744
- Accuracy on U2: 0.43859649122807015
- Accuracy on U4: 0.4351851851851852
- Accuracy on Google: 0.744
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8895585354828989
- Micro F1 score on CogALexV: 0.8453051643192488
- Micro F1 score on EVALution: 0.6793066088840737
- Micro F1 score on K&H+N: 0.9344786812269598
- Micro F1 score on ROOT09: 0.8658727671576308
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7853174603174603
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for the roberta-base encoder
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
|
research-backup
| 2022-11-22T17:00:21Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:22:15Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8430952380952381
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3582887700534759
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3649851632047478
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4280155642023346
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.532
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3333333333333333
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3101851851851852
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8460147657073979
- name: F1 (macro)
type: f1_macro
value: 0.8315897128108677
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8084507042253521
- name: F1 (macro)
type: f1_macro
value: 0.5269777075808457
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6424702058504875
- name: F1 (macro)
type: f1_macro
value: 0.6178608994596904
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.913612019197329
- name: F1 (macro)
type: f1_macro
value: 0.7738790468743169
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8693199623942337
- name: F1 (macro)
type: f1_macro
value: 0.864532922094076
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3582887700534759
- Accuracy on SAT: 0.3649851632047478
- Accuracy on BATS: 0.4280155642023346
- Accuracy on U2: 0.3333333333333333
- Accuracy on U4: 0.3101851851851852
- Accuracy on Google: 0.532
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8460147657073979
- Micro F1 score on CogALexV: 0.8084507042253521
- Micro F1 score on EVALution: 0.6424702058504875
- Micro F1 score on K&H+N: 0.913612019197329
- Micro F1 score on ROOT09: 0.8693199623942337
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8430952380952381
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for the roberta-base encoder
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
multimodalart/sd-sc
|
multimodalart
| 2022-11-22T16:19:18Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-22T16:05:03Z |
---
license: creativeml-openrail-m
---
Just the safety checker of Stable Diffusion. For the full model, refer to https://huggingface.co/runwayml/stable-diffusion-v1-5
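A hypothetical sketch of attaching this checker to a Stable Diffusion pipeline (that the checker weights load directly from this repo is an assumption):
```python
from diffusers import StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker

# assumption: this repo holds the safety checker weights at its root
safety_checker = StableDiffusionSafetyChecker.from_pretrained("multimodalart/sd-sc")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=safety_checker
)
```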
|
Dumeng/distilbert-base-uncased-finetuned-emotion
|
Dumeng
| 2022-11-22T15:11:40Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-19T19:49:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
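Pending more details, a minimal hypothetical inference sketch (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Dumeng/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```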
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/oryxspioenkop
|
huggingtweets
| 2022-11-22T15:10:21Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-22T15:09:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/oryxspioenkop/1669129816805/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/929707102083395584/tCWiYbO1_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Oryx</div>
<div style="text-align: center; font-size: 14px;">@oryxspioenkop</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Oryx.
| Data | Oryx |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 2219 |
| Short tweets | 266 |
| Tweets kept | 761 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qbqfz863/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oryxspioenkop's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/oryxspioenkop')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Dundalia/lfqa_covid
|
Dundalia
| 2022-11-22T15:07:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T14:39:45Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: lfqa_covid
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lfqa_covid
This model is a fine-tuned version of [vblagoje/bart_lfqa](https://huggingface.co/vblagoje/bart_lfqa) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1028
- Bleu: 0.0
- Gen Len: 19.8564
## Model description
More information needed
## Intended uses & limitations
More information needed
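The base model `vblagoje/bart_lfqa` conditions generation on a question concatenated with supporting passages; assuming the fine-tuned model keeps that input format, a hypothetical sketch (the prompt layout and `<P>` separator are assumptions):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Dundalia/lfqa_covid")
model = AutoModelForSeq2SeqLM.from_pretrained("Dundalia/lfqa_covid")

# assumed bart_lfqa-style input: question plus context passages
prompt = ("question: How does COVID-19 spread? "
          "context: <P> COVID-19 spreads mainly through respiratory droplets.")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```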
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 1.5923 | 1.0 | 808 | 0.1028 | 0.0 | 19.8564 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
jjjunyeong/bart-finetuned-squad
|
jjjunyeong
| 2022-11-22T14:42:07Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T12:27:04Z |
---
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: bart-finetuned-squad
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad
type: squad
config: plain_text
split: train
args: plain_text
metrics:
- name: Rouge1
type: rouge
value: 50.1505
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-squad
This model is a fine-tuned version of [p208p2002/bart-squad-qg-hl](https://huggingface.co/p208p2002/bart-squad-qg-hl) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8813
- Rouge1: 50.1505
- Rouge2: 26.8606
- Rougel: 46.0203
- Rougelsum: 46.0242
## Model description
More information needed
## Intended uses & limitations
More information needed
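The base model `p208p2002/bart-squad-qg-hl` generates questions from a context in which the answer span is highlighted; assuming the fine-tuned model keeps that convention, a hypothetical sketch (the `[HL]` markers and example context are assumptions):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jjjunyeong/bart-finetuned-squad")
model = AutoModelForSeq2SeqLM.from_pretrained("jjjunyeong/bart-finetuned-squad")

# assumed highlight format of the base qg-hl model: the answer is wrapped in [HL] tokens
context = "Harry Potter is a series of fantasy novels written by [HL]J. K. Rowling[HL]."
inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```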
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.5702 | 1.0 | 125 | 1.4266 | 49.7474 | 26.6965 | 46.3227 | 46.342 |
| 0.84 | 2.0 | 250 | 1.4845 | 49.8379 | 26.3973 | 45.126 | 45.1791 |
| 0.535 | 3.0 | 375 | 1.6037 | 50.1413 | 27.4581 | 46.7795 | 46.8001 |
| 0.3621 | 4.0 | 500 | 1.6899 | 49.6087 | 25.9818 | 45.0914 | 45.1004 |
| 0.2448 | 5.0 | 625 | 1.7540 | 49.7468 | 26.5312 | 45.5623 | 45.5296 |
| 0.1756 | 6.0 | 750 | 1.8287 | 49.4987 | 26.2315 | 45.3515 | 45.4214 |
| 0.13 | 7.0 | 875 | 1.8809 | 49.6426 | 26.4688 | 45.5167 | 45.5427 |
| 0.1016 | 8.0 | 1000 | 1.8813 | 50.1505 | 26.8606 | 46.0203 | 46.0242 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
bitsanlp/deberta-v3-base_base
|
bitsanlp
| 2022-11-22T14:37:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T13:49:27Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base_base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base_base
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
adrianccy/donut-base-sroie-fine-tuned
|
adrianccy
| 2022-11-22T13:41:56Z | 45 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-11-22T10:33:43Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-fine-tuned
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0
- Datasets 2.7.0
- Tokenizers 0.13.2
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1
|
research-backup
| 2022-11-22T13:00:28Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:34:42Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7449603174603174
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3502673796791444
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3560830860534125
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3468593663146192
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.432
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37719298245614036
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.38425925925925924
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8523429260207925
- name: F1 (macro)
type: f1_macro
value: 0.8411456349485952
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8157276995305164
- name: F1 (macro)
type: f1_macro
value: 0.5982289168562968
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6386782231852655
- name: F1 (macro)
type: f1_macro
value: 0.6034154846314037
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.95875356472143
- name: F1 (macro)
type: f1_macro
value: 0.8723815565345302
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.846443121278596
- name: F1 (macro)
type: f1_macro
value: 0.8238870756074439
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3502673796791444
- Accuracy on SAT: 0.3560830860534125
- Accuracy on BATS: 0.3468593663146192
- Accuracy on U2: 0.37719298245614036
- Accuracy on U4: 0.38425925925925924
- Accuracy on Google: 0.432
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8523429260207925
- Micro F1 score on CogALexV: 0.8157276995305164
- Micro F1 score on EVALution: 0.6386782231852655
- Micro F1 score on K&H+N: 0.95875356472143
- Micro F1 score on ROOT09: 0.846443121278596
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7449603174603174
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
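As a short follow-up (a sketch that assumes only the `get_embedding` call shown above), relation embeddings of analogous pairs can be compared with cosine similarity:
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1")
a = np.array(model.get_embedding(['Tokyo', 'Japan']))
b = np.array(model.get_embedding(['Paris', 'France']))
# Cosine similarity; expected to be high for pairs sharing the capital-of relation.
print(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```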
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
|
research-backup
| 2022-11-22T12:57:06Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:39:41Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6670436507936508
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3770053475935829
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37388724035608306
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4802668148971651
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.558
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33771929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34953703703703703
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.893174627090553
- name: F1 (macro)
type: f1_macro
value: 0.8866591988732194
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7863849765258216
- name: F1 (macro)
type: f1_macro
value: 0.5308624907920565
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5704225352112676
- name: F1 (macro)
type: f1_macro
value: 0.5510856788391408
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9581275648605412
- name: F1 (macro)
type: f1_macro
value: 0.8644516035001516
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8523973675963648
- name: F1 (macro)
type: f1_macro
value: 0.8523947470987124
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3770053475935829
- Accuracy on SAT: 0.37388724035608306
- Accuracy on BATS: 0.4802668148971651
- Accuracy on U2: 0.33771929824561403
- Accuracy on U4: 0.34953703703703703
- Accuracy on Google: 0.558
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.893174627090553
- Micro F1 score on CogALexV: 0.7863849765258216
- Micro F1 score on EVALution: 0.5704225352112676
- Micro F1 score on K&H+N: 0.9581275648605412
- Micro F1 score on ROOT09: 0.8523973675963648
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6670436507936508
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2
|
research-backup
| 2022-11-22T12:14:21Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:37:55Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.763452380952381
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4358288770053476
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44510385756676557
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6453585325180656
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.764
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.41228070175438597
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4305555555555556
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.891969263221335
- name: F1 (macro)
type: f1_macro
value: 0.8861553769138059
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7784037558685445
- name: F1 (macro)
type: f1_macro
value: 0.5482893350573881
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6121343445287107
- name: F1 (macro)
type: f1_macro
value: 0.5872535272466797
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9549280100159978
- name: F1 (macro)
type: f1_macro
value: 0.8679160068348847
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8567847069884049
- name: F1 (macro)
type: f1_macro
value: 0.8549220771705669
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4358288770053476
- Accuracy on SAT: 0.44510385756676557
- Accuracy on BATS: 0.6453585325180656
- Accuracy on U2: 0.41228070175438597
- Accuracy on U4: 0.4305555555555556
- Accuracy on Google: 0.764
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.891969263221335
- Micro F1 score on CogALexV: 0.7784037558685445
- Micro F1 score on EVALution: 0.6121343445287107
- Micro F1 score on K&H+N: 0.9549280100159978
- Micro F1 score on ROOT09: 0.8567847069884049
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.763452380952381
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1
|
research-backup
| 2022-11-22T11:41:54Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:36:44Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7905555555555556
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4625668449197861
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4599406528189911
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.481378543635353
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.708
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4517543859649123
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42824074074074076
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.896790718698207
- name: F1 (macro)
type: f1_macro
value: 0.8904957973096685
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7983568075117371
- name: F1 (macro)
type: f1_macro
value: 0.55128776736284
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6072589382448538
- name: F1 (macro)
type: f1_macro
value: 0.5871638601862589
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9596577867427141
- name: F1 (macro)
type: f1_macro
value: 0.8808105637743019
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8708868693199624
- name: F1 (macro)
type: f1_macro
value: 0.8665327123718048
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4625668449197861
- Accuracy on SAT: 0.4599406528189911
- Accuracy on BATS: 0.481378543635353
- Accuracy on U2: 0.4517543859649123
- Accuracy on U4: 0.42824074074074076
- Accuracy on Google: 0.708
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.896790718698207
- Micro F1 score on CogALexV: 0.7983568075117371
- Micro F1 score on EVALution: 0.6072589382448538
- Micro F1 score on K&H+N: 0.9596577867427141
- Micro F1 score on ROOT09: 0.8708868693199624
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7905555555555556
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1
|
research-backup
| 2022-11-22T11:10:41Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:34:44Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8926984126984127
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4572192513368984
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4599406528189911
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5369649805447471
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.748
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4298245614035088
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4375
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8945306614434232
- name: F1 (macro)
type: f1_macro
value: 0.8889050346897381
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7887323943661971
- name: F1 (macro)
type: f1_macro
value: 0.5429622796506292
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6132177681473456
- name: F1 (macro)
type: f1_macro
value: 0.5967298388536921
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9580580093204424
- name: F1 (macro)
type: f1_macro
value: 0.8772669717354012
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8733939204011282
- name: F1 (macro)
type: f1_macro
value: 0.865464870691388
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4572192513368984
- Accuracy on SAT: 0.4599406528189911
- Accuracy on BATS: 0.5369649805447471
- Accuracy on U2: 0.4298245614035088
- Accuracy on U4: 0.4375
- Accuracy on Google: 0.748
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8945306614434232
- Micro F1 score on CogALexV: 0.7887323943661971
- Micro F1 score on EVALution: 0.6132177681473456
- Micro F1 score on K&H+N: 0.9580580093204424
- Micro F1 score on ROOT09: 0.8733939204011282
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8926984126984127
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
kunalr63/roberta-retrained
|
kunalr63
| 2022-11-22T10:54:15Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-22T05:25:30Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-retrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-retrained
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
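No usage details are given; as a hypothetical sketch, the model should expose the standard fill-mask interface inherited from roberta-base (the example sentence is illustrative only):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="kunalr63/roberta-retrained")
# RoBERTa uses <mask> as its mask token.
print(fill("The candidate has five years of <mask> experience."))
```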
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
echarlaix/vit-food101-int8
|
echarlaix
| 2022-11-22T10:48:21Z | 24 | 0 |
transformers
|
[
"transformers",
"openvino",
"vit",
"image-classification",
"int8",
"dataset:food101",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-27T16:58:41Z |
---
license: apache-2.0
datasets:
- food101
tags:
- openvino
- int8
---
## [Vision Transformer (ViT)](https://huggingface.co/juliensimon/autotrain-food101-1471154050) quantized and exported to the OpenVINO IR.
## Model Details
**Model Description:** This ViT model fine-tuned on Food-101 was statically quantized and exported to the OpenVINO IR using [optimum](https://huggingface.co/docs/optimum/intel/optimization_ov).
## Usage example
You can use this model with Transformers *pipeline*.
```python
from transformers import pipeline, AutoFeatureExtractor
from optimum.intel.openvino import OVModelForImageClassification
model_id = "echarlaix/vit-food101-int8"
model = OVModelForImageClassification.from_pretrained(model_id)
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
pipe = pipeline("image-classification", model=model, feature_extractor=feature_extractor)
outputs = pipe("http://farm2.staticflickr.com/1375/1394861946_171ea43524_z.jpg")
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2
|
research-backup
| 2022-11-22T10:47:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:32:09Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8508333333333333
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4304812834224599
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42729970326409494
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44580322401334077
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.63
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3684210526315789
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4375
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8832303751695043
- name: F1 (macro)
type: f1_macro
value: 0.8741977324174292
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8166666666666667
- name: F1 (macro)
type: f1_macro
value: 0.591110337920912
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6240520043336945
- name: F1 (macro)
type: f1_macro
value: 0.6033252228331162
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9563886763580719
- name: F1 (macro)
type: f1_macro
value: 0.8721700434002555
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8602319022250078
- name: F1 (macro)
type: f1_macro
value: 0.8623792536691078
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4304812834224599
- Accuracy on SAT: 0.42729970326409494
- Accuracy on BATS: 0.44580322401334077
- Accuracy on U2: 0.3684210526315789
- Accuracy on U4: 0.4375
- Accuracy on Google: 0.63
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8832303751695043
- Micro F1 score on CogALexV: 0.8166666666666667
- Micro F1 score on EVALution: 0.6240520043336945
- Micro F1 score on K&H+N: 0.9563886763580719
- Micro F1 score on ROOT09: 0.8602319022250078
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8508333333333333
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
echarlaix/distilbert-base-uncased-finetuned-sst-2-english-openvino
|
echarlaix
| 2022-11-22T10:42:52Z | 20,880 | 0 |
transformers
|
[
"transformers",
"openvino",
"text-classification",
"en",
"dataset:sst2",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-11T09:43:26Z |
---
language: en
license: apache-2.0
datasets:
- sst2
- glue
tags:
- openvino
---
## [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) exported to the OpenVINO IR.
## Model Details
**Model Description:** This model is a fine-tuned checkpoint of DistilBERT-base-uncased, trained on SST-2, and reaches an accuracy of 91.3 on the dev set.
## Usage example
You can use this model with the Transformers *pipeline*.
```python
from transformers import AutoTokenizer, pipeline
from optimum.intel.openvino import OVModelForSequenceClassification
model_id = "echarlaix/distilbert-base-uncased-finetuned-sst-2-english-openvino"
model = OVModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
cls_pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "He's a dreadful magician."
outputs = cls_pipe(text)
```
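Alternatively, a sketch (assuming the standard `transformers`-style forward pass that `OVModelForSequenceClassification` mirrors) without the pipeline:
```python
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForSequenceClassification

model_id = "echarlaix/distilbert-base-uncased-finetuned-sst-2-english-openvino"
model = OVModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("He's a dreadful magician.", return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # NEGATIVE / POSITIVE
```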
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1
|
research-backup
| 2022-11-22T10:08:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:30:53Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.808968253968254
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4839572192513369
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4896142433234421
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6264591439688716
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.748
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.36403508771929827
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.43287037037037035
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9186379388277837
- name: F1 (macro)
type: f1_macro
value: 0.9146569952039126
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8244131455399061
- name: F1 (macro)
type: f1_macro
value: 0.6192186484290235
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6511375947995667
- name: F1 (macro)
type: f1_macro
value: 0.6358411811809679
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9683522292550601
- name: F1 (macro)
type: f1_macro
value: 0.9036902248765999
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8824819805703541
- name: F1 (macro)
type: f1_macro
value: 0.8801659277988089
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4839572192513369
- Accuracy on SAT: 0.4896142433234421
- Accuracy on BATS: 0.6264591439688716
- Accuracy on U2: 0.36403508771929827
- Accuracy on U4: 0.43287037037037035
- Accuracy on Google: 0.748
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9186379388277837
- Micro F1 score on CogALexV: 0.8244131455399061
- Micro F1 score on EVALution: 0.6511375947995667
- Micro F1 score on K&H+N: 0.9683522292550601
- Micro F1 score on ROOT09: 0.8824819805703541
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.808968253968254
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and load the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2
|
research-backup
| 2022-11-22T09:45:45Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:28:46Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7883531746031746
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.553475935828877
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5459940652818991
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.725958866036687
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.9
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.49122807017543857
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5185185185185185
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9088443573903873
- name: F1 (macro)
type: f1_macro
value: 0.903210414011203
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8422535211267606
- name: F1 (macro)
type: f1_macro
value: 0.6636142368492658
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
      name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6641386782231853
- name: F1 (macro)
type: f1_macro
value: 0.6591198888693468
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9522153439521458
- name: F1 (macro)
type: f1_macro
value: 0.8737588276035191
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8947038545910373
- name: F1 (macro)
type: f1_macro
value: 0.8939851129279454
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.553475935828877
- Accuracy on SAT: 0.5459940652818991
- Accuracy on BATS: 0.725958866036687
- Accuracy on U2: 0.49122807017543857
- Accuracy on U4: 0.5185185185185185
- Accuracy on Google: 0.9
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9088443573903873
- Micro F1 score on CogALexV: 0.8422535211267606
- Micro F1 score on EVALution: 0.6641386782231853
- Micro F1 score on K&H+N: 0.9522153439521458
- Micro F1 score on ROOT09: 0.8947038545910373
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7883531746031746
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as follows.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding; shape of (768, ) for this roberta-base model
```
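Since RelBERT maps a word pair to a single vector, relational similarity between two pairs reduces to vector similarity. A minimal sketch continuing from the snippet above (the cosine helper is illustrative, not part of the relbert API):
```python
import numpy as np

def cosine(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

capital = model.get_embedding(['Tokyo', 'Japan'])
analogue = model.get_embedding(['Paris', 'France'])
print(cosine(capital, analogue))  # analogous pairs should score high
```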
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
m-aliabbas/wav2vec2-base-timit-demo-idrak-paperspace1
|
m-aliabbas
| 2022-11-22T09:36:03Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-22T09:17:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-idrak-paperspace1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-idrak-paperspace1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Wer: 0.3471
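The card body below is an auto-generated stub, so no usage snippet is included; a minimal inference sketch with the standard `transformers` pipeline (the file name is a placeholder for a 16 kHz speech recording) could look like:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="m-aliabbas/wav2vec2-base-timit-demo-idrak-paperspace1",
)
print(asr("audio.wav")["text"])
```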
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1034 | 0.87 | 500 | 0.3623 | 0.3471 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu116
- Datasets 1.18.3
- Tokenizers 0.12.1
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2
|
research-backup
| 2022-11-22T09:14:07Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:26:54Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.5858333333333333
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3235294117647059
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3264094955489614
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.40355753196220123
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.454
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3991228070175439
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3680555555555556
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8927226156395962
- name: F1 (macro)
type: f1_macro
value: 0.8860530490594479
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7772300469483568
- name: F1 (macro)
type: f1_macro
value: 0.49603297373551636
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5910075839653305
- name: F1 (macro)
type: f1_macro
value: 0.5855884123582632
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9483897892467135
- name: F1 (macro)
type: f1_macro
value: 0.8589949863564919
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8614854277655907
- name: F1 (macro)
type: f1_macro
value: 0.8600976443012404
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3235294117647059
- Accuracy on SAT: 0.3264094955489614
- Accuracy on BATS: 0.40355753196220123
- Accuracy on U2: 0.3991228070175439
- Accuracy on U4: 0.3680555555555556
- Accuracy on Google: 0.454
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8927226156395962
- Micro F1 score on CogALexV: 0.7772300469483568
- Micro F1 score on EVALution: 0.5910075839653305
- Micro F1 score on K&H+N: 0.9483897892467135
- Micro F1 score on ROOT09: 0.8614854277655907
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.5858333333333333
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as follows.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding; shape of (768, ) for this roberta-base model
```
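The lexical relation classification scores above are obtained by training a classifier on top of frozen pair embeddings. A sketch of that setup continuing from the snippet above (scikit-learn, the pairs, and the labels are illustrative assumptions, not the actual evaluation code):
```python
from sklearn.linear_model import LogisticRegression

pairs = [['cat', 'animal'], ['wheel', 'car'], ['hot', 'cold']]
labels = ['hypernym', 'meronym', 'antonym']
X = [model.get_embedding(p) for p in pairs]

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict([model.get_embedding(['dog', 'animal'])]))  # ideally ['hypernym']
```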
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1
|
research-backup
| 2022-11-22T09:00:57Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:27:28Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.919047619047619
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4117647058823529
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.41839762611275966
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.519177320733741
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.72
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3508771929824561
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4074074074074074
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9056802772336899
- name: F1 (macro)
type: f1_macro
value: 0.9008212802993153
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.839906103286385
- name: F1 (macro)
type: f1_macro
value: 0.6372689183104334
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6560130010834236
- name: F1 (macro)
type: f1_macro
value: 0.6454146372683375
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.964804896710023
- name: F1 (macro)
type: f1_macro
value: 0.8961604291304897
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8884362268881228
- name: F1 (macro)
type: f1_macro
value: 0.8874462481330262
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4117647058823529
- Accuracy on SAT: 0.41839762611275966
- Accuracy on BATS: 0.519177320733741
- Accuracy on U2: 0.3508771929824561
- Accuracy on U4: 0.4074074074074074
- Accuracy on Google: 0.72
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9056802772336899
- Micro F1 score on CogALexV: 0.839906103286385
- Micro F1 score on EVALution: 0.6560130010834236
- Micro F1 score on K&H+N: 0.964804896710023
- Micro F1 score on ROOT09: 0.8884362268881228
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.919047619047619
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as follows.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding; shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-1/raw/main/trainer_config.json).
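The `temperature_nce_rank` entry above describes a temperature that moves linearly from `min` to `max` as a function of the rank of a negative sample; a sketch of that schedule as read from the config (the library's exact formula may differ):
```python
def rank_temperature(rank: int, n: int, t_min: float = 0.01, t_max: float = 0.05) -> float:
    """Linearly interpolate the NCE temperature over ranks 0..n-1."""
    if n <= 1:
        return t_min
    return t_min + (t_max - t_min) * rank / (n - 1)

print([round(rank_temperature(r, 5), 3) for r in range(5)])  # [0.01, 0.02, 0.03, 0.04, 0.05]
```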
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2
|
research-backup
| 2022-11-22T08:30:07Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-22T07:24:50Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8554365079365079
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4197860962566845
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42136498516320475
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4535853251806559
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.666
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.40789473684210525
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44212962962962965
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9100497212596053
- name: F1 (macro)
type: f1_macro
value: 0.9093922982791334
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8133802816901409
- name: F1 (macro)
type: f1_macro
value: 0.6029825137477882
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6375947995666306
- name: F1 (macro)
type: f1_macro
value: 0.6388590886994098
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9636920080684427
- name: F1 (macro)
type: f1_macro
value: 0.8892956368301006
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8737073017862739
- name: F1 (macro)
type: f1_macro
value: 0.8722600157780708
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4197860962566845
- Accuracy on SAT: 0.42136498516320475
- Accuracy on BATS: 0.4535853251806559
- Accuracy on U2: 0.40789473684210525
- Accuracy on U4: 0.44212962962962965
- Accuracy on Google: 0.666
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9100497212596053
- Micro F1 score on CogALexV: 0.8133802816901409
- Micro F1 score on EVALution: 0.6375947995666306
- Micro F1 score on K&H+N: 0.9636920080684427
- Micro F1 score on ROOT09: 0.8737073017862739
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8554365079365079
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as follows.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding; shape of (768, ) for this roberta-base model
```
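The analogy accuracies above correspond to picking, for each question, the candidate pair whose embedding is most similar to the query pair. A minimal sketch of that selection rule continuing from the snippet above (the query and candidates are illustrative):
```python
import numpy as np

def cosine(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

query = model.get_embedding(['word', 'language'])
candidates = [['note', 'music'], ['tree', 'forest'], ['wheel', 'car']]
scores = [cosine(query, model.get_embedding(c)) for c in candidates]
print(candidates[int(np.argmax(scores))])  # the highest-scoring candidate
```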
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-2/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
alexziweiwang/combined-MTL9
|
alexziweiwang
| 2022-11-22T08:28:47Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-16T21:30:31Z |
---
tags:
- generated_from_trainer
model-index:
- name: combined-MTL9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# combined-MTL9
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3413
- Wer: 0.8603
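For reference, the reported WER can be recomputed for new transcriptions with the `evaluate` library; a minimal sketch (the strings are placeholders):
```python
import evaluate

wer = evaluate.load("wer")
score = wer.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on a mat"],
)
print(score)  # 1 substitution over 6 reference words ≈ 0.167
```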
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 76.4918 | 0.35 | 500 | 3.4171 | 1.0 |
| 2.9927 | 0.69 | 1000 | 2.4743 | 1.0667 |
| 2.2033 | 1.04 | 1500 | 1.6693 | 1.25 |
| 1.6165 | 1.39 | 2000 | 1.5341 | 1.1808 |
| 1.4208 | 1.74 | 2500 | 1.3148 | 1.1179 |
| 1.2858 | 2.08 | 3000 | 1.2272 | 1.0872 |
| 1.1317 | 2.43 | 3500 | 1.0865 | 1.0731 |
| 1.0668 | 2.78 | 4000 | 1.0798 | 1.0474 |
| 1.0429 | 3.12 | 4500 | 1.4627 | 1.0936 |
| 0.9615 | 3.47 | 5000 | 1.2540 | 1.0090 |
| 0.975 | 3.82 | 5500 | 0.9936 | 0.9679 |
| 0.8517 | 4.17 | 6000 | 1.1039 | 1.0282 |
| 0.8281 | 4.51 | 6500 | 1.0609 | 0.9897 |
| 0.8413 | 4.86 | 7000 | 0.9513 | 0.9397 |
| 0.7618 | 5.21 | 7500 | 1.1656 | 0.9718 |
| 0.7173 | 5.56 | 8000 | 1.1974 | 0.9603 |
| 0.7449 | 5.9 | 8500 | 1.0144 | 0.9731 |
| 0.6762 | 6.25 | 9000 | 1.1774 | 0.9231 |
| 0.6749 | 6.6 | 9500 | 1.1823 | 0.9205 |
| 0.6776 | 6.94 | 10000 | 0.9167 | 0.9244 |
| 0.5937 | 7.29 | 10500 | 1.3344 | 0.9769 |
| 0.6488 | 7.64 | 11000 | 1.0245 | 0.9692 |
| 0.6116 | 7.99 | 11500 | 0.9444 | 0.9141 |
| 0.5497 | 8.33 | 12000 | 0.9499 | 0.9692 |
| 0.5937 | 8.68 | 12500 | 1.1087 | 0.9231 |
| 0.5268 | 9.03 | 13000 | 1.3408 | 0.9269 |
| 0.5078 | 9.38 | 13500 | 1.1737 | 0.9038 |
| 0.497 | 9.72 | 14000 | 0.9963 | 0.8987 |
| 0.5231 | 10.07 | 14500 | 1.3247 | 0.9590 |
| 0.4651 | 10.42 | 15000 | 1.1988 | 0.9308 |
| 0.481 | 10.76 | 15500 | 1.0034 | 0.9308 |
| 0.481 | 11.11 | 16000 | 1.0040 | 0.8782 |
| 0.4751 | 11.46 | 16500 | 0.8824 | 0.8538 |
| 0.4554 | 11.81 | 17000 | 0.9741 | 0.8821 |
| 0.426 | 12.15 | 17500 | 0.8552 | 0.8615 |
| 0.4186 | 12.5 | 18000 | 1.0646 | 0.8833 |
| 0.4154 | 12.85 | 18500 | 0.9618 | 0.8936 |
| 0.5115 | 13.19 | 19000 | 1.0312 | 0.8910 |
| 0.3564 | 13.54 | 19500 | 1.0686 | 0.8769 |
| 0.3927 | 13.89 | 20000 | 1.2533 | 0.9103 |
| 0.3628 | 14.24 | 20500 | 1.2945 | 0.8872 |
| 0.3808 | 14.58 | 21000 | 1.0195 | 0.8538 |
| 0.3981 | 14.93 | 21500 | 1.0388 | 0.8808 |
| 0.3337 | 15.28 | 22000 | 1.0464 | 0.8923 |
| 0.3092 | 15.62 | 22500 | 1.0843 | 0.8705 |
| 0.378 | 15.97 | 23000 | 1.0880 | 0.8859 |
| 0.3231 | 16.32 | 23500 | 0.9205 | 0.8782 |
| 0.3588 | 16.67 | 24000 | 1.0064 | 0.8962 |
| 0.3048 | 17.01 | 24500 | 0.9130 | 0.8705 |
| 0.3 | 17.36 | 25000 | 1.0100 | 0.9077 |
| 0.3045 | 17.71 | 25500 | 1.0559 | 0.9077 |
| 0.3024 | 18.06 | 26000 | 1.1225 | 0.9026 |
| 0.2614 | 18.4 | 26500 | 1.0911 | 0.8897 |
| 0.2755 | 18.75 | 27000 | 1.0872 | 0.8808 |
| 0.2798 | 19.1 | 27500 | 1.2911 | 0.9154 |
| 0.2455 | 19.44 | 28000 | 1.0646 | 0.8821 |
| 0.2524 | 19.79 | 28500 | 1.3356 | 0.9154 |
| 0.2435 | 20.14 | 29000 | 1.1257 | 0.8641 |
| 0.2458 | 20.49 | 29500 | 1.2221 | 0.8667 |
| 0.2216 | 20.83 | 30000 | 1.1364 | 0.8769 |
| 0.234 | 21.18 | 30500 | 1.2094 | 0.8808 |
| 0.233 | 21.53 | 31000 | 1.1604 | 0.8910 |
| 0.2536 | 21.88 | 31500 | 1.0934 | 0.8808 |
| 0.1885 | 22.22 | 32000 | 1.2177 | 0.8718 |
| 0.2186 | 22.57 | 32500 | 1.0539 | 0.8667 |
| 0.1991 | 22.92 | 33000 | 1.2222 | 0.8641 |
| 0.2027 | 23.26 | 33500 | 1.3863 | 0.8577 |
| 0.193 | 23.61 | 34000 | 1.2293 | 0.8705 |
| 0.2054 | 23.96 | 34500 | 1.3398 | 0.8769 |
| 0.2197 | 24.31 | 35000 | 1.3138 | 0.8705 |
| 0.1898 | 24.65 | 35500 | 1.2897 | 0.8679 |
| 0.1933 | 25.0 | 36000 | 1.2666 | 0.8769 |
| 0.1632 | 25.35 | 36500 | 1.2758 | 0.8756 |
| 0.1869 | 25.69 | 37000 | 1.1811 | 0.8603 |
| 0.1731 | 26.04 | 37500 | 1.2511 | 0.8679 |
| 0.1821 | 26.39 | 38000 | 1.3391 | 0.8718 |
| 0.1648 | 26.74 | 38500 | 1.2505 | 0.8628 |
| 0.1909 | 27.08 | 39000 | 1.2984 | 0.85 |
| 0.1902 | 27.43 | 39500 | 1.2261 | 0.8487 |
| 0.1449 | 27.78 | 40000 | 1.2853 | 0.8487 |
| 0.1583 | 28.12 | 40500 | 1.3361 | 0.8628 |
| 0.148 | 28.47 | 41000 | 1.3638 | 0.8654 |
| 0.1648 | 28.82 | 41500 | 1.3380 | 0.8603 |
| 0.1461 | 29.17 | 42000 | 1.3561 | 0.8603 |
| 0.1565 | 29.51 | 42500 | 1.3489 | 0.8615 |
| 0.16 | 29.86 | 43000 | 1.3413 | 0.8603 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v2
|
gary109
| 2022-11-22T07:50:02Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"dataset:ai_light_dance",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-22T06:05:50Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
datasets:
- ai_light_dance
metrics:
- wer
model-index:
- name: ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v2
This model is a fine-tuned version of [gary109/ai-light-dance_drums_pretrain_wav2vec2-base-new](https://huggingface.co/gary109/ai-light-dance_drums_pretrain_wav2vec2-base-new) on the GARY109/AI_LIGHT_DANCE - ONSET-IDMT-SMT-DRUMS-V2+MDBDRUMS dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5264
- Wer: 0.3635
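A minimal CTC inference sketch for this checkpoint, assuming `speech` already holds a 16 kHz mono waveform as a NumPy array (audio loading is left out):
```python
import torch
from transformers import AutoProcessor, AutoModelForCTC

model_id = "gary109/ai-light-dance_drums_ft_pretrain_wav2vec2-base-new-v2"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
print(processor.batch_decode(ids))
```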
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6468 | 0.98 | 22 | 3.2315 | 1.0 |
| 1.5745 | 1.98 | 44 | 3.1603 | 1.0 |
| 1.465 | 2.98 | 66 | 2.2551 | 1.0 |
| 1.3168 | 3.98 | 88 | 1.8461 | 1.0 |
| 1.1359 | 4.98 | 110 | 1.4874 | 0.9797 |
| 0.9769 | 5.98 | 132 | 1.7359 | 0.5495 |
| 0.9019 | 6.98 | 154 | 1.5833 | 0.5268 |
| 0.8057 | 7.98 | 176 | 1.4892 | 0.5304 |
| 1.0845 | 8.98 | 198 | 1.3939 | 0.5197 |
| 0.7562 | 9.98 | 220 | 1.1238 | 0.5447 |
| 0.7259 | 10.98 | 242 | 1.2936 | 0.5006 |
| 0.7318 | 11.98 | 264 | 1.2763 | 0.4660 |
| 0.6452 | 12.98 | 286 | 1.2947 | 0.4779 |
| 0.6353 | 13.98 | 308 | 1.1925 | 0.4517 |
| 0.6463 | 14.98 | 330 | 0.8667 | 0.4100 |
| 0.5381 | 15.98 | 352 | 1.1243 | 0.3909 |
| 0.5637 | 16.98 | 374 | 0.8683 | 0.3754 |
| 0.6149 | 17.98 | 396 | 1.1040 | 0.3731 |
| 0.6138 | 18.98 | 418 | 1.1068 | 0.3850 |
| 0.7381 | 19.98 | 440 | 0.9203 | 0.3623 |
| 0.5064 | 20.98 | 462 | 0.8806 | 0.3540 |
| 0.4731 | 21.98 | 484 | 0.7259 | 0.3623 |
| 0.5232 | 22.98 | 506 | 0.7935 | 0.3516 |
| 0.4689 | 23.98 | 528 | 0.7771 | 0.3540 |
| 0.4902 | 24.98 | 550 | 0.6897 | 0.3909 |
| 0.4079 | 25.98 | 572 | 0.8030 | 0.3552 |
| 0.5045 | 26.98 | 594 | 0.6778 | 0.3790 |
| 0.4373 | 27.98 | 616 | 0.7456 | 0.3695 |
| 0.4366 | 28.98 | 638 | 0.7009 | 0.3433 |
| 0.3944 | 29.98 | 660 | 0.6841 | 0.3468 |
| 0.4206 | 30.98 | 682 | 0.7093 | 0.3373 |
| 0.3949 | 31.98 | 704 | 0.6901 | 0.3576 |
| 0.4416 | 32.98 | 726 | 0.6762 | 0.3397 |
| 0.4248 | 33.98 | 748 | 0.7196 | 0.3540 |
| 0.4214 | 34.98 | 770 | 0.6669 | 0.3254 |
| 0.416 | 35.98 | 792 | 0.6422 | 0.3445 |
| 0.3687 | 36.98 | 814 | 0.6345 | 0.3504 |
| 0.4119 | 37.98 | 836 | 0.6306 | 0.3385 |
| 0.359 | 38.98 | 858 | 0.6538 | 0.3576 |
| 0.359 | 39.98 | 880 | 0.6613 | 0.3349 |
| 0.3488 | 40.98 | 902 | 0.5976 | 0.3468 |
| 0.3543 | 41.98 | 924 | 0.6327 | 0.3433 |
| 0.3647 | 42.98 | 946 | 0.6208 | 0.3600 |
| 0.3529 | 43.98 | 968 | 0.6008 | 0.3492 |
| 0.3691 | 44.98 | 990 | 0.6065 | 0.3492 |
| 0.329 | 45.98 | 1012 | 0.6288 | 0.3373 |
| 0.3357 | 46.98 | 1034 | 0.5760 | 0.3480 |
| 0.3318 | 47.98 | 1056 | 0.5637 | 0.3564 |
| 0.3181 | 48.98 | 1078 | 0.5560 | 0.3468 |
| 0.3313 | 49.98 | 1100 | 0.5905 | 0.3337 |
| 0.3059 | 50.98 | 1122 | 0.5443 | 0.3278 |
| 0.3375 | 51.98 | 1144 | 0.5695 | 0.3576 |
| 0.3191 | 52.98 | 1166 | 0.5874 | 0.3385 |
| 0.3115 | 53.98 | 1188 | 0.5264 | 0.3635 |
| 0.3044 | 54.98 | 1210 | 0.5480 | 0.3433 |
| 0.3256 | 55.98 | 1232 | 0.5677 | 0.3385 |
| 0.2938 | 56.98 | 1254 | 0.5597 | 0.3445 |
| 0.2853 | 57.98 | 1276 | 0.5942 | 0.3373 |
| 0.3348 | 58.98 | 1298 | 0.5733 | 0.3421 |
| 0.3024 | 59.98 | 1320 | 0.5604 | 0.3433 |
| 0.2655 | 60.98 | 1342 | 0.5348 | 0.3468 |
| 0.3029 | 61.98 | 1364 | 0.5752 | 0.3206 |
| 0.3435 | 62.98 | 1386 | 0.5489 | 0.3063 |
| 0.3125 | 63.98 | 1408 | 0.5736 | 0.3075 |
| 0.263 | 64.98 | 1430 | 0.5505 | 0.3206 |
| 0.2665 | 65.98 | 1452 | 0.5391 | 0.3230 |
| 0.299 | 66.98 | 1474 | 0.5389 | 0.3135 |
| 0.2909 | 67.98 | 1496 | 0.5841 | 0.3099 |
| 0.2988 | 68.98 | 1518 | 0.5847 | 0.3004 |
| 0.2879 | 69.98 | 1540 | 0.5941 | 0.2968 |
| 0.2802 | 70.98 | 1562 | 0.6612 | 0.2920 |
| 0.2877 | 71.98 | 1584 | 0.5641 | 0.3051 |
| 0.2727 | 72.98 | 1606 | 0.6138 | 0.3063 |
| 0.2668 | 73.98 | 1628 | 0.6087 | 0.2920 |
| 0.2675 | 74.98 | 1650 | 0.5876 | 0.2932 |
| 0.264 | 75.98 | 1672 | 0.6043 | 0.2980 |
| 0.2352 | 76.98 | 1694 | 0.5829 | 0.2932 |
| 0.2494 | 77.98 | 1716 | 0.5775 | 0.3063 |
| 0.2621 | 78.98 | 1738 | 0.5676 | 0.2956 |
| 0.2788 | 79.98 | 1760 | 0.5864 | 0.2932 |
| 0.2615 | 80.98 | 1782 | 0.5754 | 0.3015 |
| 0.2542 | 81.98 | 1804 | 0.5651 | 0.3027 |
| 0.2641 | 82.98 | 1826 | 0.5731 | 0.3004 |
| 0.2532 | 83.98 | 1848 | 0.5782 | 0.2968 |
| 0.2645 | 84.98 | 1870 | 0.5718 | 0.3039 |
| 0.2296 | 85.98 | 1892 | 0.5628 | 0.3147 |
| 0.2394 | 86.98 | 1914 | 0.5920 | 0.3027 |
| 0.2636 | 87.98 | 1936 | 0.6085 | 0.2968 |
| 0.2371 | 88.98 | 1958 | 0.5809 | 0.3075 |
| 0.2364 | 89.98 | 1980 | 0.5927 | 0.3039 |
| 0.2812 | 90.98 | 2002 | 0.5713 | 0.3123 |
| 0.2141 | 91.98 | 2024 | 0.5743 | 0.3039 |
| 0.2919 | 92.98 | 2046 | 0.5837 | 0.3063 |
| 0.2288 | 93.98 | 2068 | 0.5860 | 0.3015 |
| 0.2585 | 94.98 | 2090 | 0.5776 | 0.3147 |
| 0.2529 | 95.98 | 2112 | 0.5625 | 0.3159 |
| 0.2343 | 96.98 | 2134 | 0.5700 | 0.3087 |
| 0.2567 | 97.98 | 2156 | 0.5729 | 0.3087 |
| 0.2448 | 98.98 | 2178 | 0.5728 | 0.3111 |
| 0.2501 | 99.98 | 2200 | 0.5744 | 0.3099 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.8.1+cu111
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
philschmid/lilt-en-funsd
|
philschmid
| 2022-11-22T07:42:39Z | 821 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"lilt",
"token-classification",
"generated_from_trainer",
"dataset:funsd-layoutlmv3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-18T08:27:17Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-funsd
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6117
- Answer: {'precision': 0.8821428571428571, 'recall': 0.9069767441860465, 'f1': 0.8943874471937237, 'number': 817}
- Header: {'precision': 0.6126126126126126, 'recall': 0.5714285714285714, 'f1': 0.591304347826087, 'number': 119}
- Question: {'precision': 0.9045045045045045, 'recall': 0.9322191272051996, 'f1': 0.9181527206218564, 'number': 1077}
- Overall Precision: 0.8797
- Overall Recall: 0.9006
- Overall F1: 0.8900
- Overall Accuracy: 0.8204
## Model Usage
```python
from transformers import LiltForTokenClassification, LayoutLMv3Processor
from PIL import Image, ImageDraw, ImageFont
import torch
# load model and processor from huggingface hub
model = LiltForTokenClassification.from_pretrained("philschmid/lilt-en-funsd")
processor = LayoutLMv3Processor.from_pretrained("philschmid/lilt-en-funsd")
# helper function to unnormalize bboxes for drawing onto the image
def unnormalize_box(bbox, width, height):
return [
width * (bbox[0] / 1000),
height * (bbox[1] / 1000),
width * (bbox[2] / 1000),
height * (bbox[3] / 1000),
]
label2color = {
"B-HEADER": "blue",
"B-QUESTION": "red",
"B-ANSWER": "green",
"I-HEADER": "blue",
"I-QUESTION": "red",
"I-ANSWER": "green",
}
# draw results onto the image
def draw_boxes(image, boxes, predictions):
width, height = image.size
normalizes_boxes = [unnormalize_box(box, width, height) for box in boxes]
# draw predictions over the image
draw = ImageDraw.Draw(image)
font = ImageFont.load_default()
for prediction, box in zip(predictions, normalizes_boxes):
if prediction == "O":
continue
draw.rectangle(box, outline="black")
draw.rectangle(box, outline=label2color[prediction])
draw.text((box[0] + 10, box[1] - 10), text=prediction, fill=label2color[prediction], font=font)
return image
# run inference
def run_inference(image, model=model, processor=processor, output_image=True):
# create model input
encoding = processor(image, return_tensors="pt")
del encoding["pixel_values"]
# run inference
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1).squeeze().tolist()
# get labels
labels = [model.config.id2label[prediction] for prediction in predictions]
if output_image:
return draw_boxes(image, encoding["bbox"][0], labels)
else:
return labels
# "dataset" is assumed to be loaded beforehand, e.g.:
# from datasets import load_dataset
# dataset = load_dataset("nielsr/funsd-layoutlmv3")
run_inference(dataset["test"][34]["image"])
```
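Note that the encoding's `pixel_values` are deleted before the forward pass because LiLT, unlike LayoutLMv3, consumes only text and layout (bounding boxes); the LayoutLMv3 processor is reused here purely to OCR the image into words and boxes.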
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0211 | 10.53 | 200 | 1.5528 | {'precision': 0.8458904109589042, 'recall': 0.9069767441860465, 'f1': 0.8753691671588896, 'number': 817} | {'precision': 0.5684210526315789, 'recall': 0.453781512605042, 'f1': 0.5046728971962617, 'number': 119} | {'precision': 0.896551724137931, 'recall': 0.89322191272052, 'f1': 0.8948837209302325, 'number': 1077} | 0.8596 | 0.8728 | 0.8662 | 0.8011 |
| 0.0132 | 21.05 | 400 | 1.3143 | {'precision': 0.8447058823529412, 'recall': 0.8788249694002448, 'f1': 0.8614277144571085, 'number': 817} | {'precision': 0.6020408163265306, 'recall': 0.4957983193277311, 'f1': 0.543778801843318, 'number': 119} | {'precision': 0.8854262144821264, 'recall': 0.8969359331476323, 'f1': 0.8911439114391144, 'number': 1077} | 0.8548 | 0.8659 | 0.8603 | 0.8095 |
| 0.0052 | 31.58 | 600 | 1.5747 | {'precision': 0.8482446206115515, 'recall': 0.9167686658506732, 'f1': 0.8811764705882352, 'number': 817} | {'precision': 0.6283185840707964, 'recall': 0.5966386554621849, 'f1': 0.6120689655172413, 'number': 119} | {'precision': 0.8997161778618732, 'recall': 0.883008356545961, 'f1': 0.8912839737582005, 'number': 1077} | 0.8626 | 0.8798 | 0.8711 | 0.8030 |
| 0.0073 | 42.11 | 800 | 1.4848 | {'precision': 0.8487972508591065, 'recall': 0.9069767441860465, 'f1': 0.8769230769230769, 'number': 817} | {'precision': 0.5190839694656488, 'recall': 0.5714285714285714, 'f1': 0.5439999999999999, 'number': 119} | {'precision': 0.8941947565543071, 'recall': 0.8867223769730733, 'f1': 0.8904428904428905, 'number': 1077} | 0.8514 | 0.8763 | 0.8636 | 0.7969 |
| 0.0057 | 52.63 | 1000 | 1.3993 | {'precision': 0.8852071005917159, 'recall': 0.9155446756425949, 'f1': 0.9001203369434416, 'number': 817} | {'precision': 0.5454545454545454, 'recall': 0.6050420168067226, 'f1': 0.5737051792828685, 'number': 119} | {'precision': 0.899090909090909, 'recall': 0.9182915506035283, 'f1': 0.9085898024804776, 'number': 1077} | 0.8710 | 0.8987 | 0.8846 | 0.8198 |
| 0.0023 | 63.16 | 1200 | 1.6463 | {'precision': 0.8961201501877347, 'recall': 0.8763769889840881, 'f1': 0.886138613861386, 'number': 817} | {'precision': 0.5625, 'recall': 0.5294117647058824, 'f1': 0.5454545454545455, 'number': 119} | {'precision': 0.888, 'recall': 0.9275766016713092, 'f1': 0.9073569482288827, 'number': 1077} | 0.8733 | 0.8833 | 0.8782 | 0.8082 |
| 0.001 | 73.68 | 1400 | 1.6476 | {'precision': 0.8676814988290398, 'recall': 0.9069767441860465, 'f1': 0.8868940754039496, 'number': 817} | {'precision': 0.6571428571428571, 'recall': 0.5798319327731093, 'f1': 0.6160714285714286, 'number': 119} | {'precision': 0.908256880733945, 'recall': 0.9192200557103064, 'f1': 0.9137055837563451, 'number': 1077} | 0.8785 | 0.8942 | 0.8863 | 0.8137 |
| 0.0014 | 84.21 | 1600 | 1.6493 | {'precision': 0.8814814814814815, 'recall': 0.8739290085679314, 'f1': 0.8776889981561156, 'number': 817} | {'precision': 0.6194690265486725, 'recall': 0.5882352941176471, 'f1': 0.603448275862069, 'number': 119} | {'precision': 0.894404332129964, 'recall': 0.9201485608170845, 'f1': 0.9070938215102976, 'number': 1077} | 0.8740 | 0.8818 | 0.8778 | 0.8041 |
| 0.0006 | 94.74 | 1800 | 1.6193 | {'precision': 0.8766467065868263, 'recall': 0.8959608323133414, 'f1': 0.8861985472154963, 'number': 817} | {'precision': 0.6068376068376068, 'recall': 0.5966386554621849, 'f1': 0.6016949152542374, 'number': 119} | {'precision': 0.8946428571428572, 'recall': 0.9303621169916435, 'f1': 0.912152935821575, 'number': 1077} | 0.8711 | 0.8967 | 0.8837 | 0.8137 |
| 0.0001 | 105.26 | 2000 | 1.6048 | {'precision': 0.8751472320376914, 'recall': 0.9094247246022031, 'f1': 0.8919567827130852, 'number': 817} | {'precision': 0.6140350877192983, 'recall': 0.5882352941176471, 'f1': 0.6008583690987125, 'number': 119} | {'precision': 0.9062784349408554, 'recall': 0.924791086350975, 'f1': 0.9154411764705882, 'number': 1077} | 0.8773 | 0.8987 | 0.8879 | 0.8194 |
| 0.0001 | 115.79 | 2200 | 1.6117 | {'precision': 0.8821428571428571, 'recall': 0.9069767441860465, 'f1': 0.8943874471937237, 'number': 817} | {'precision': 0.6126126126126126, 'recall': 0.5714285714285714, 'f1': 0.591304347826087, 'number': 119} | {'precision': 0.9045045045045045, 'recall': 0.9322191272051996, 'f1': 0.9181527206218564, 'number': 1077} | 0.8797 | 0.9006 | 0.8900 | 0.8204 |
| 0.0001 | 126.32 | 2400 | 1.6163 | {'precision': 0.8799048751486326, 'recall': 0.9057527539779682, 'f1': 0.8926417370325694, 'number': 817} | {'precision': 0.6052631578947368, 'recall': 0.5798319327731093, 'f1': 0.5922746781115881, 'number': 119} | {'precision': 0.9062784349408554, 'recall': 0.924791086350975, 'f1': 0.9154411764705882, 'number': 1077} | 0.8788 | 0.8967 | 0.8876 | 0.8192 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.12.1
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0
|
research-backup
| 2022-11-22T07:32:05Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T15:11:15Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.77375
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3422459893048128
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34421364985163205
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.45969983324068925
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.476
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33771929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.34953703703703703
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.849630857315052
- name: F1 (macro)
type: f1_macro
value: 0.8270516141593186
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8122065727699531
- name: F1 (macro)
type: f1_macro
value: 0.5472113531139201
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.605092091007584
- name: F1 (macro)
type: f1_macro
value: 0.5327366048438427
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9129164637963414
- name: F1 (macro)
type: f1_macro
value: 0.7847480560698086
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8270134753995614
- name: F1 (macro)
type: f1_macro
value: 0.8313563740149
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3422459893048128
- Accuracy on SAT: 0.34421364985163205
- Accuracy on BATS: 0.45969983324068925
- Accuracy on U2: 0.33771929824561403
- Accuracy on U4: 0.34953703703703703
- Accuracy on Google: 0.476
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.849630857315052
- Micro F1 score on CogALexV: 0.8122065727699531
- Micro F1 score on EVALution: 0.605092091007584
- Micro F1 score on K&H+N: 0.9129164637963414
- Micro F1 score on ROOT09: 0.8270134753995614
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.77375
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and load the model as follows.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # a single relation embedding; shape of (768, ) for this roberta-base model
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-0/raw/main/trainer_config.json).
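This run optimizes a triplet loss over relation embeddings (anchor, positive, and negative pairs); a sketch of that objective in PyTorch (the margin is illustrative and the library's implementation may differ):
```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # pull same-relation pairs together, push different relations apart
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

a, p, n = (torch.randn(4, 768) for _ in range(3))
print(triplet_loss(a, p, n))
```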
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0
|
research-backup
| 2022-11-22T07:30:45Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T15:04:16Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7428373015873015
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3502673796791444
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35311572700296734
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5697609783212896
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.678
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.37280701754385964
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4027777777777778
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8265782733162573
- name: F1 (macro)
type: f1_macro
value: 0.8097358007943485
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7572769953051643
- name: F1 (macro)
type: f1_macro
value: 0.44873901164798935
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5211267605633803
- name: F1 (macro)
type: f1_macro
value: 0.4144470035861812
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8593586979202894
- name: F1 (macro)
type: f1_macro
value: 0.7164045411277497
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8549044186775305
- name: F1 (macro)
type: f1_macro
value: 0.852498730871873
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3502673796791444
- Accuracy on SAT: 0.35311572700296734
- Accuracy on BATS: 0.5697609783212896
- Accuracy on U2: 0.37280701754385964
- Accuracy on U4: 0.4027777777777778
- Accuracy on Google: 0.678
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8265782733162573
- Micro F1 score on CogALexV: 0.7572769953051643
- Micro F1 score on EVALution: 0.5211267605633803
- Micro F1 score on K&H+N: 0.8593586979202894
- Micro F1 score on ROOT09: 0.8549044186775305
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7428373015873015
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
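Since RelBERT maps a word pair to a single relation embedding, two pairs can be compared by the similarity of their vectors. A minimal sketch (assuming, as above, that `get_embedding` returns a plain vector for a single pair):
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0")

# Embed two word pairs and score their relational similarity
v1 = np.array(model.get_embedding(['Tokyo', 'Japan']))
v2 = np.array(model.get_embedding(['Paris', 'France']))
cosine = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
print(cosine)  # high when both pairs share the same relation (capital-of)
```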
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-0/raw/main/trainer_config.json).
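For reference, `loss_function: triplet` denotes a triplet-style margin objective over relation embeddings. The sketch below illustrates the general idea only; it is not the exact RelBERT implementation, and the margin value is an assumed default:
```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=1.0):
    # anchor/positive share a relation; negative does not.
    # Each tensor has shape (batch, dim); margin=1.0 is illustrative.
    d_pos = F.pairwise_distance(anchor, positive)  # same-relation distance
    d_neg = F.pairwise_distance(anchor, negative)  # cross-relation distance
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
```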
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0
|
research-backup
| 2022-11-22T07:30:18Z | 97 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T15:02:09Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.6770238095238095
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.32887700534759357
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33827893175074186
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.48360200111172874
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.49
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3684210526315789
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.35648148148148145
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8271809552508663
- name: F1 (macro)
type: f1_macro
value: 0.8139940994079059
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.7927230046948357
- name: F1 (macro)
type: f1_macro
value: 0.47757376520100464
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.5866738894907909
- name: F1 (macro)
type: f1_macro
value: 0.5099661171290004
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8843291368157474
- name: F1 (macro)
type: f1_macro
value: 0.7260666016287155
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8335944844876215
- name: F1 (macro)
type: f1_macro
value: 0.8285945038843159
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.32887700534759357
- Accuracy on SAT: 0.33827893175074186
- Accuracy on BATS: 0.48360200111172874
- Accuracy on U2: 0.3684210526315789
- Accuracy on U4: 0.35648148148148145
- Accuracy on Google: 0.49
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8271809552508663
- Micro F1 score on CogALexV: 0.7927230046948357
- Micro F1 score on EVALution: 0.5866738894907909
- Micro F1 score on K&H+N: 0.8843291368157474
- Micro F1 score on ROOT09: 0.8335944844876215
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.6770238095238095
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0
|
research-backup
| 2022-11-22T07:29:52Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T14:59:11Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7335515873015873
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3850267379679144
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3857566765578635
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6142301278488049
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.606
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.33771929824561403
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3541666666666667
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8312490583094772
- name: F1 (macro)
type: f1_macro
value: 0.8160781730825878
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8084507042253521
- name: F1 (macro)
type: f1_macro
value: 0.5318915064045434
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6386782231852655
- name: F1 (macro)
type: f1_macro
value: 0.6037663010095351
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.91242957501565
- name: F1 (macro)
type: f1_macro
value: 0.7652689334496495
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8793481667188969
- name: F1 (macro)
type: f1_macro
value: 0.8771620145886178
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.3850267379679144
- Accuracy on SAT: 0.3857566765578635
- Accuracy on BATS: 0.6142301278488049
- Accuracy on U2: 0.33771929824561403
- Accuracy on U4: 0.3541666666666667
- Accuracy on Google: 0.606
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.8312490583094772
- Micro F1 score on CogALexV: 0.8084507042253521
- Micro F1 score on EVALution: 0.6386782231852655
- Micro F1 score on K&H+N: 0.91242957501565
- Micro F1 score on ROOT09: 0.8793481667188969
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7335515873015873
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0
|
research-backup
| 2022-11-22T07:28:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T14:54:36Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8158134920634921
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42245989304812837
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.41543026706231456
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6837131739855475
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.822
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.40350877192982454
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4652777777777778
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9071869820702124
- name: F1 (macro)
type: f1_macro
value: 0.9024202874494452
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8255868544600939
- name: F1 (macro)
type: f1_macro
value: 0.6238359204705145
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6289274106175514
- name: F1 (macro)
type: f1_macro
value: 0.6173167061017508
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9555540098768867
- name: F1 (macro)
type: f1_macro
value: 0.871337903489136
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8790347853337511
- name: F1 (macro)
type: f1_macro
value: 0.8736570441239309
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.42245989304812837
- Accuracy on SAT: 0.41543026706231456
- Accuracy on BATS: 0.6837131739855475
- Accuracy on U2: 0.40350877192982454
- Accuracy on U4: 0.4652777777777778
- Accuracy on Google: 0.822
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9071869820702124
- Micro F1 score on CogALexV: 0.8255868544600939
- Micro F1 score on EVALution: 0.6289274106175514
- Micro F1 score on K&H+N: 0.9555540098768867
- Micro F1 score on ROOT09: 0.8790347853337511
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8158134920634921
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-0/raw/main/trainer_config.json).
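For reference, `temperature_nce_rank` with `type: linear` describes an NCE temperature interpolated linearly between `min` and `max` as a function of a candidate's rank. The sketch below is one reading of that configuration, not the exact RelBERT code:
```python
def rank_temperature(rank, n_ranks, t_min=0.01, t_max=0.05):
    # Linearly interpolate the temperature over ranks 1..n_ranks.
    if n_ranks <= 1:
        return t_min
    frac = (rank - 1) / (n_ranks - 1)
    return t_min + frac * (t_max - t_min)
```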
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0
|
research-backup
| 2022-11-22T07:28:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T14:52:15Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.796984126984127
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4037433155080214
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.3916913946587537
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.6859366314619233
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.784
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.42105263157894735
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4583333333333333
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9010094922404701
- name: F1 (macro)
type: f1_macro
value: 0.8947571278975387
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8227699530516432
- name: F1 (macro)
type: f1_macro
value: 0.6007828127513786
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6164680390032503
- name: F1 (macro)
type: f1_macro
value: 0.5989494559912151
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9572928983793559
- name: F1 (macro)
type: f1_macro
value: 0.8821535108627934
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8743340645565655
- name: F1 (macro)
type: f1_macro
value: 0.8719695915031801
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4037433155080214
- Accuracy on SAT: 0.3916913946587537
- Accuracy on BATS: 0.6859366314619233
- Accuracy on U2: 0.42105263157894735
- Accuracy on U4: 0.4583333333333333
- Accuracy on Google: 0.784
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9010094922404701
- Micro F1 score on CogALexV: 0.8227699530516432
- Micro F1 score on EVALution: 0.6164680390032503
- Micro F1 score on K&H+N: 0.9572928983793559
- Micro F1 score on ROOT09: 0.8743340645565655
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.796984126984127
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 6
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0
|
research-backup
| 2022-11-22T07:28:05Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T14:49:35Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.7995436507936508
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4572192513368984
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4599406528189911
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7326292384658143
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.84
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4166666666666667
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4375
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9058309477173422
- name: F1 (macro)
type: f1_macro
value: 0.89974713256054
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.826056338028169
- name: F1 (macro)
type: f1_macro
value: 0.6201374746332642
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6457204767063922
- name: F1 (macro)
type: f1_macro
value: 0.6243022596465048
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9559017875773805
- name: F1 (macro)
type: f1_macro
value: 0.8816731086105152
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8862425571921027
- name: F1 (macro)
type: f1_macro
value: 0.8841357906198278
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4572192513368984
- Accuracy on SAT: 0.4599406528189911
- Accuracy on BATS: 0.7326292384658143
- Accuracy on U2: 0.4166666666666667
- Accuracy on U4: 0.4375
- Accuracy on Google: 0.84
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9058309477173422
- Micro F1 score on CogALexV: 0.826056338028169
- Micro F1 score on EVALution: 0.6457204767063922
- Micro F1 score on K&H+N: 0.9559017875773805
- Micro F1 score on ROOT09: 0.8862425571921027
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.7995436507936508
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0
|
research-backup
| 2022-11-22T07:27:12Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T14:45:16Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8203769841269841
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.49732620320855614
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.49554896142433236
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.613118399110617
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.694
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.44298245614035087
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4675925925925926
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9080910049721259
- name: F1 (macro)
type: f1_macro
value: 0.9055495580705791
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8291079812206574
- name: F1 (macro)
type: f1_macro
value: 0.6322244948930222
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6310942578548212
- name: F1 (macro)
type: f1_macro
value: 0.6288452835665572
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9536064547541212
- name: F1 (macro)
type: f1_macro
value: 0.8526202146492776
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8849890316515199
- name: F1 (macro)
type: f1_macro
value: 0.8856882855922619
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.49732620320855614
- Accuracy on SAT: 0.49554896142433236
- Accuracy on BATS: 0.613118399110617
- Accuracy on U2: 0.44298245614035087
- Accuracy on U4: 0.4675925925925926
- Accuracy on Google: 0.694
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9080910049721259
- Micro F1 score on CogALexV: 0.8291079812206574
- Micro F1 score on EVALution: 0.6310942578548212
- Micro F1 score on K&H+N: 0.9536064547541212
- Micro F1 score on ROOT09: 0.8849890316515199
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8203769841269841
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0
|
research-backup
| 2022-11-22T07:25:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/semeval2012_relational_similarity_v6",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-21T14:38:42Z |
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8518253968253968
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4679144385026738
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.4688427299703264
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7204002223457476
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.85
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.39473684210526316
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.48842592592592593
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9174325749585657
- name: F1 (macro)
type: f1_macro
value: 0.9126308463435432
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8429577464788731
- name: F1 (macro)
type: f1_macro
value: 0.6599099425304438
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: EVALution
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6435536294691224
- name: F1 (macro)
type: f1_macro
value: 0.6405512542694948
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9610488975446895
- name: F1 (macro)
type: f1_macro
value: 0.8766645353309496
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8943904732058916
- name: F1 (macro)
type: f1_macro
value: 0.893806087595518
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0
RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on
[relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.4679144385026738
- Accuracy on SAT: 0.4688427299703264
- Accuracy on BATS: 0.7204002223457476
- Accuracy on U2: 0.39473684210526316
- Accuracy on U4: 0.48842592592592593
- Accuracy on Google: 0.85
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9174325749585657
- Micro F1 score on CogALexV: 0.8429577464788731
- Micro F1 score on EVALution: 0.6435536294691224
- Micro F1 score on K&H+N: 0.9610488975446895
- Micro F1 score on ROOT09: 0.8943904732058916
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8518253968253968
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair; shape (768,) for roberta-base
```
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 8
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-0/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
tomXBE/distilbert-base-uncased-finetuned-squad
|
tomXBE
| 2022-11-22T06:11:38Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-22T03:25:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1564
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2231 | 1.0 | 5533 | 1.1602 |
| 0.9559 | 2.0 | 11066 | 1.1334 |
| 0.7571 | 3.0 | 16599 | 1.1564 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
xfreakazoidx/NghtmrFrk
|
xfreakazoidx
| 2022-11-22T04:46:17Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-11-21T17:46:49Z |
"Nightmare Combined Model" was an attempt at mixing all four of my recent models. Prompt is "NghtmrFrk" It's pretty amazing if you want a bit of all the models in one model instead. It gives you some truly nightmare worthy stuff, be creative with what you type in. Or just have no prompt but NghtmrFrk for random horror! Even though this is a combined model, you may want to try the models I have separately if your are looking for a certain style specifically. CFG keep low, steps can be anything. Same with sampler.

|
valurank/Pegasus_cnn_news_headline_generator
|
valurank
| 2022-11-22T04:00:25Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-22T02:21:20Z |
---
tags:
- generated_from_trainer
model-index:
- name: pegasus_cnn_news_article_title_12000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus_cnn_news_article_title_12000
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2874 | 0.65 | 500 | 0.2258 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
COctovianto/bert-finetuned-squad
|
COctovianto
| 2022-11-22T03:51:55Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-17T10:07:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: COctovianto/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# COctovianto/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5679
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2701 | 0 |
| 0.7799 | 1 |
| 0.5679 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
kojima-r/wav2vec2-base-birddb-small
|
kojima-r
| 2022-11-22T03:48:56Z | 49 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"endpoints_compatible",
"region:us"
] | null | 2022-11-22T00:35:51Z |
This is a wav2vec2-base model trained on selected bird songs from the birddb dataset.
```
import librosa
import torch
from transformers import Wav2Vec2ForPreTraining

# Load the recording and resample to the 16 kHz rate the model expects
sound_file = 'sample.wav'
sound_data, _ = librosa.load(sound_file, sr=16000)

model_id = "kojima-r/wav2vec2-base-birddb-small"
model = Wav2Vec2ForPreTraining.from_pretrained(model_id)

# Forward pass; projected_states holds the frame-level representations
result = model(torch.tensor([sound_data]))
hidden_vecs = result.projected_states
```

|
guannan/facial_recognition
|
guannan
| 2022-11-22T03:16:05Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-11-22T03:16:05Z |
---
license: bigscience-openrail-m
---
|
birgermoell/whisper-large-sv
|
birgermoell
| 2022-11-22T02:31:07Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sv",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-21T16:58:56Z |
---
language:
- sv
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-large-sv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sv-SE
split: train[:1%]+validation[:1%]
args: sv-SE
metrics:
- name: Wer
type: wer
value: 30.935251798561154
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-sv
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5259
- Wer: 30.9353
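A minimal transcription sketch using the standard `transformers` ASR pipeline (the file name is illustrative):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="birgermoell/whisper-large-sv")
print(asr("sample.wav")["text"])  # path to a local audio file
```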
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 4.5521 | 0.04 | 5 | 3.5048 | 48.2014 |
| 1.8009 | 0.08 | 10 | 1.5259 | 30.9353 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
danupurnomo/dummy-titanic
|
danupurnomo
| 2022-11-22T01:47:22Z | 5 | 0 |
sklearn
|
[
"sklearn",
"joblib",
"tabular-classification",
"region:us"
] |
tabular-classification
| 2022-11-21T08:07:40Z |
---
tags:
- tabular-classification
- sklearn
dataset:
- titanic
widget:
structuredData:
PassengerId:
- 1191
Pclass:
- 1
Name:
- Sherlock Holmes
Sex:
- male
SibSp:
- 0
Parch:
- 0
Ticket:
- C.A.29395
Fare:
- 12
Cabin:
- F44
Embarked:
- S
---
## Titanic (Survived/Not Survived) - Binary Classification
### How to use
```python
from huggingface_hub import hf_hub_url, cached_download
import joblib
import pandas as pd
import numpy as np
from tensorflow.keras.models import load_model
REPO_ID = 'danupurnomo/dummy-titanic'
PIPELINE_FILENAME = 'final_pipeline.pkl'
TF_FILENAME = 'titanic_model.h5'
model_pipeline = joblib.load(cached_download(
    hf_hub_url(REPO_ID, PIPELINE_FILENAME)
))
model_seq = load_model(cached_download(
    hf_hub_url(REPO_ID, TF_FILENAME)
))
```
### Example of New Data
```python
new_data = {
    'PassengerId': 1191,
    'Pclass': 1,
    'Name': 'Sherlock Holmes',
    'Sex': 'male',
    'Age': 30,
    'SibSp': 0,
    'Parch': 0,
    'Ticket': 'C.A.29395',
    'Fare': 12,
    'Cabin': 'F44',
    'Embarked': 'S'
}
new_data = pd.DataFrame([new_data])
```
### Transform Inference-Set
```python
new_data_transform = model_pipeline.transform(new_data)
```
### Predict using Neural Networks
```python
y_pred_inf_single = model_seq.predict(new_data_transform)
y_pred_inf_single = np.where(y_pred_inf_single >= 0.5, 1, 0)
print('Result : ', y_pred_inf_single)
# [[0]]
```
|
Jellywibble/gptneo125M-rm-on-100-qa-pairs
|
Jellywibble
| 2022-11-22T01:46:52Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T01:28:42Z |
W&B run: https://wandb.ai/jellywibble/huggingface/runs/1yo5mgs4?workspace=user-jellywibble
|
TUMxudashuai/ppo-LunarLander-v2
|
TUMxudashuai
| 2022-11-22T01:41:02Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-22T01:40:20Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 155.33 +/- 58.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repo's file list for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; see the repository's file list.
checkpoint = load_from_hub(repo_id="TUMxudashuai/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Prajwaln/distilbert-base-uncased-finetuned-cola
|
Prajwaln
| 2022-11-22T01:15:40Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-22T01:09:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
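A usage sketch, assuming this is a CoLA-style acceptability classifier per the model name (label names depend on the config and may be the generic LABEL_0/LABEL_1):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Prajwaln/distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read by the whole class."))
# e.g. [{'label': 'LABEL_1', 'score': ...}] -- label mapping depends on the config
```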
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
pritoms/gpt2-finetuned-transcriptSteve
|
pritoms
| 2022-11-22T00:38:27Z | 102 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-21T20:06:38Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-finetuned-transcriptSteve
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-transcriptSteve
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6308
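A minimal generation sketch (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pritoms/gpt2-finetuned-transcriptSteve")
print(generator("Today I want to talk about", max_length=50, num_return_sequences=1)[0]["generated_text"])
```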
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 18 | 2.6415 |
| No log | 2.0 | 36 | 2.6353 |
| No log | 3.0 | 54 | 2.6308 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Tonjk/NEW_OCR_10_8wangchanberta-base-att-spm-uncased
|
Tonjk
| 2022-11-21T23:28:55Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"camembert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-21T11:02:19Z |
---
tags:
- generated_from_trainer
model-index:
- name: NEW_OCR_10_8wangchanberta-base-att-spm-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NEW_OCR_10_8wangchanberta-base-att-spm-uncased
This model is a fine-tuned version of [airesearch/wangchanberta-base-att-spm-uncased](https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0147
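A minimal fill-mask sketch (the Thai sentence is illustrative; the mask token is taken from the tokenizer rather than hard-coded):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Tonjk/NEW_OCR_10_8wangchanberta-base-att-spm-uncased")
print(fill(f"วันนี้อากาศ{fill.tokenizer.mask_token}"))
```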
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 0.108 | 1.0 | 10701 | 0.0167 |
| 0.0161 | 2.0 | 21402 | 0.0140 |
| 0.0126 | 3.0 | 32103 | 0.0130 |
| 0.0105 | 4.0 | 42804 | 0.0125 |
| 0.009 | 5.0 | 53505 | 0.0135 |
| 0.008 | 6.0 | 64206 | 0.0137 |
| 0.0074 | 7.0 | 74907 | 0.0139 |
| 0.0064 | 8.0 | 85608 | 0.0143 |
| 0.0058 | 9.0 | 96309 | 0.0147 |
| 0.0054 | 10.0 | 107010 | 0.0147 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
consciousAI/question-answering-roberta-base-s
|
consciousAI
| 2022-11-21T22:11:48Z | 146 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"Question Answering",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-18T19:36:48Z |
---
license: apache-2.0
tags:
- Question Answering
metrics:
- squad
model-index:
- name: question-answering-roberta-base-s
results: []
---
# Question Answering
The model is intended for the Q&A task: given a question and context, the model attempts to infer the answer text, answer span, and a confidence score.<br>
The model is encoder-only (roberta-base) with a question-answering LM head, fine-tuned on the SQuAD dataset with **exact_match:** 86.14 & **f1:** 92.330 performance scores.
[Live Demo: Question Answering Encoders vs Generative](https://huggingface.co/spaces/consciousAI/question_answering)
Please follow this link for [Encoder based Question Answering V2](https://huggingface.co/consciousAI/question-answering-roberta-base-s-v2/)
<br>Please follow this link for [Generative Question Answering](https://huggingface.co/consciousAI/question-answering-generative-t5-v1-base-s-q-c/)
Example code:
```python
from transformers import pipeline
model_checkpoint = "consciousAI/question-answering-roberta-base-s"
context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"
question_answerer = pipeline("question-answering", model=model_checkpoint)
question_answerer(question=question, context=context)
```
## Training and evaluation data
SQUAD Split
## Training procedure
Preprocessing:
1. Longer SQuAD examples were sub-chunked with an input context max length of 384 tokens and a stride of 128 tokens.
2. Target answer spans were readjusted for the sub-chunks; sub-chunks with no answer or only a partial answer were assigned the target span (0, 0).
Metrics:
1. Adjusted accordingly to handle sub-chunking.
2. n_best = 20
3. Answers with length zero or longer than the max answer length (30) were skipped.
### Training hyperparameters
Custom Training Loop:
The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Epoch | F1 | Exact Match |
|:-----:|:--------:|:-----------:|
| 1.0 | 91.3085 | 84.5412 |
| 2.0 | 92.3304 | 86.1400 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.0
|
xfreakazoidx/NightmareWormWetWorms
|
xfreakazoidx
| 2022-11-21T21:12:26Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-11-21T19:06:08Z |
Third model is Nightmare Wet Worms. Prompt being "NghtmrWrmFrk". It's more based on my models that are full of tentacles, worms, maggots, wet-looking, drippy, etc. This model isn't perfect and a lot of words don't seem to matter as much, but you can still get some amazing results if you're into this type of look. Heck, just type a bunch of random words and you get weird images! Keep the CFG low; steps can be any amount. Samplers can be anything.

|
huggingtweets/big___oven-raspberryl0ver
|
huggingtweets
| 2022-11-21T20:59:11Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-25T18:28:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/big___oven-raspberryl0ver/1669064347124/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1571653458972794884/eaxhUsib_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1552729971956727808/zVaFH3ex_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">oskcar & 🌞</div>
<div style="text-align: center; font-size: 14px;">@big___oven-raspberryl0ver</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from oskcar & 🌞.
| Data | oskcar | 🌞 |
| --- | --- | --- |
| Tweets downloaded | 2755 | 1689 |
| Retweets | 667 | 385 |
| Short tweets | 317 | 192 |
| Tweets kept | 1771 | 1112 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/36yboi59/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @big___oven-raspberryl0ver's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3orysshx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3orysshx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/big___oven-raspberryl0ver')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/adamscochran-fehrsam-taschalabs
|
huggingtweets
| 2022-11-21T20:20:38Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-21T20:17:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/adamscochran-fehrsam-taschalabs/1669062033978/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1504547300416364550/rFebXP9K_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1513406762904612866/-haRj3pk_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1593745112844144641/Q2zhPcdt_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Tascha & Fred Ehrsam & Adam Cochran (adamscochran.eth)</div>
<div style="text-align: center; font-size: 14px;">@adamscochran-fehrsam-taschalabs</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Tascha & Fred Ehrsam & Adam Cochran (adamscochran.eth).
| Data | Tascha | Fred Ehrsam | Adam Cochran (adamscochran.eth) |
| --- | --- | --- | --- |
| Tweets downloaded | 3244 | 1674 | 3242 |
| Retweets | 215 | 188 | 555 |
| Short tweets | 210 | 150 | 150 |
| Tweets kept | 2819 | 1336 | 2537 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/35tvoqtp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @adamscochran-fehrsam-taschalabs's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/fv0c31k5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/fv0c31k5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/adamscochran-fehrsam-taschalabs')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
prajjwal1/ctrl_discovery_3
|
prajjwal1
| 2022-11-21T20:04:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ctrl",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- conditional
- text
- generation
license: "mit"
datasets:
- discofuse
- discovery
metrics:
- perplexity
- ppl
---
Please refer to this repository (https://github.com/prajjwal1/discosense) for usage instructions.
|
prajjwal1/ctrl_discovery_2
|
prajjwal1
| 2022-11-21T20:03:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"ctrl",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- conditional
- text
- generation
license: "mit"
datasets:
- discofuse
- discovery
metrics:
- perplexity
- ppl
---
Please refer to this repository (https://github.com/prajjwal1/discosense) for usage instructions.
|
dung1308/phobert-base-finetuned-vbert
|
dung1308
| 2022-11-21T20:03:21Z | 70 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-21T14:35:38Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/phobert-base-finetuned-vbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dung1308/phobert-base-finetuned-vbert
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.3312
- Validation Loss: 3.8888
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.3312 | 3.8888 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
|
nubby/dconway-artstyle
|
nubby
| 2022-11-21T19:52:50Z | 0 | 3 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-21T19:37:03Z |
---
license: creativeml-openrail-m
---
An Anything-V3.0-based Stable Diffusion model, Dreambooth-trained on the general art style of Daniel Conway. Trained for 2,400 steps using 30 total training images.
## Usage
Can be used in Stable Diffusion, including the extremely popular Automatic1111 Web UI, like any other model: place the .CKPT file in the correct directory. Please consult the documentation for your installation of Stable Diffusion for more specific instructions.
Use the following tokens in your prompt to achieve the desired output.
Token: ```"dconway"``` Class: ```"illustration style"```
I have generally found the best results come from using the token and class together at the beginning of the prompt. You can also try using one or the other, or mixing them in other ways, to achieve different outputs.
Example Prompt 1: ```"dconway illustration style, 1girl, pink hair, blue eyes, french braid, hair bun, single sidelock, adjusting hair, light smile, parted lips, looking at viewer, head tilt, atrium, bird cage, water, potted plant, clock, fountain, dappled sunlight, sunbeam, light rays, caustics, bloom, extremely detailed, intricate, masterpiece, best quality"```
Example Prompt 2: ```"dconway illustration style, a beautiful landscape with a river rushing towards a mountain range in the distance with clouds above, glacier, flower"```
For a more anime style try adding ```"3d model"``` to your negative prompt.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
bnriiitb/whisper-small-hi
|
bnriiitb
| 2022-11-21T19:33:33Z | 81 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-20T08:39:57Z |
---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Naga Budigam
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 42.69448912215356
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Naga Budigam
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3620
- Wer: 42.6945
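A lower-level inference sketch using the processor and `generate` directly (the file name is illustrative; audio is loaded as 16 kHz mono):
```python
import librosa
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("bnriiitb/whisper-small-hi")
model = WhisperForConditionalGeneration.from_pretrained("bnriiitb/whisper-small-hi")

audio, _ = librosa.load("sample.wav", sr=16000)  # illustrative file name
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```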
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5719 | 0.06 | 100 | 0.6811 | 79.3913 |
| 0.4096 | 0.12 | 200 | 0.4827 | 62.2492 |
| 0.3104 | 0.18 | 300 | 0.3839 | 44.1167 |
| 0.2728 | 0.24 | 400 | 0.3620 | 42.6945 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
gngpostalsrvc/BERiT_2000_2_layers_40_epochs
|
gngpostalsrvc
| 2022-11-21T19:24:36Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-21T18:29:10Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_2_layers_40_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_2_layers_40_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- label_smoothing_factor: 0.2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 15.0851 | 0.19 | 500 | 8.5468 |
| 7.8971 | 0.39 | 1000 | 7.3376 |
| 7.3108 | 0.58 | 1500 | 7.1632 |
| 7.134 | 0.77 | 2000 | 7.0700 |
| 7.0956 | 0.97 | 2500 | 7.0723 |
| 7.0511 | 1.16 | 3000 | 6.9560 |
| 7.0313 | 1.36 | 3500 | 6.9492 |
| 7.0028 | 1.55 | 4000 | 6.9048 |
| 6.9563 | 1.74 | 4500 | 6.8456 |
| 6.9214 | 1.94 | 5000 | 6.8019 |
| 11.1596 | 2.13 | 5500 | 7.5882 |
| 7.5824 | 2.32 | 6000 | 7.1291 |
| 7.2581 | 2.52 | 6500 | 7.1123 |
| 7.2232 | 2.71 | 7000 | 7.1059 |
| 7.1734 | 2.9 | 7500 | 7.1120 |
| 7.1504 | 3.1 | 8000 | 7.0946 |
| 7.1314 | 3.29 | 8500 | 7.0799 |
| 7.1236 | 3.49 | 9000 | 7.1175 |
| 7.1275 | 3.68 | 9500 | 7.0905 |
| 7.1087 | 3.87 | 10000 | 7.0839 |
| 7.1212 | 4.07 | 10500 | 7.0822 |
| 7.1136 | 4.26 | 11000 | 7.0703 |
| 7.1025 | 4.45 | 11500 | 7.1035 |
| 7.0931 | 4.65 | 12000 | 7.0759 |
| 7.0899 | 4.84 | 12500 | 7.0883 |
| 7.0834 | 5.03 | 13000 | 7.1307 |
| 7.0761 | 5.23 | 13500 | 7.0642 |
| 7.0706 | 5.42 | 14000 | 7.0324 |
| 7.0678 | 5.62 | 14500 | 7.0704 |
| 7.0614 | 5.81 | 15000 | 7.0317 |
| 7.0569 | 6.0 | 15500 | 7.0421 |
| 7.057 | 6.2 | 16000 | 7.0250 |
| 7.0503 | 6.39 | 16500 | 7.0129 |
| 7.0529 | 6.58 | 17000 | 7.0316 |
| 7.0453 | 6.78 | 17500 | 7.0436 |
| 7.0218 | 6.97 | 18000 | 7.0064 |
| 7.0415 | 7.16 | 18500 | 7.0385 |
| 7.0338 | 7.36 | 19000 | 6.9756 |
| 7.0488 | 7.55 | 19500 | 7.0054 |
| 7.0347 | 7.75 | 20000 | 6.9946 |
| 7.0464 | 7.94 | 20500 | 7.0055 |
| 7.017 | 8.13 | 21000 | 7.0158 |
| 7.0159 | 8.33 | 21500 | 7.0052 |
| 7.0223 | 8.52 | 22000 | 6.9925 |
| 6.9989 | 8.71 | 22500 | 7.0307 |
| 7.0218 | 8.91 | 23000 | 6.9767 |
| 6.9998 | 9.1 | 23500 | 7.0096 |
| 7.01 | 9.3 | 24000 | 6.9599 |
| 6.9964 | 9.49 | 24500 | 6.9896 |
| 6.9906 | 9.68 | 25000 | 6.9903 |
| 7.0336 | 9.88 | 25500 | 6.9807 |
| 7.0053 | 10.07 | 26000 | 6.9776 |
| 6.9826 | 10.26 | 26500 | 6.9836 |
| 6.9897 | 10.46 | 27000 | 6.9886 |
| 6.9829 | 10.65 | 27500 | 6.9991 |
| 6.9849 | 10.84 | 28000 | 6.9651 |
| 6.9901 | 11.04 | 28500 | 6.9822 |
| 6.9852 | 11.23 | 29000 | 6.9921 |
| 6.9757 | 11.43 | 29500 | 6.9636 |
| 6.991 | 11.62 | 30000 | 6.9952 |
| 6.9818 | 11.81 | 30500 | 6.9799 |
| 6.9911 | 12.01 | 31000 | 6.9725 |
| 6.9423 | 12.2 | 31500 | 6.9540 |
| 6.9885 | 12.39 | 32000 | 6.9771 |
| 6.9636 | 12.59 | 32500 | 6.9475 |
| 6.9567 | 12.78 | 33000 | 6.9653 |
| 6.9749 | 12.97 | 33500 | 6.9711 |
| 6.9739 | 13.17 | 34000 | 6.9691 |
| 6.9651 | 13.36 | 34500 | 6.9569 |
| 6.9599 | 13.56 | 35000 | 6.9608 |
| 6.957 | 13.75 | 35500 | 6.9531 |
| 6.9539 | 13.94 | 36000 | 6.9704 |
| 6.958 | 14.14 | 36500 | 6.9478 |
| 6.9597 | 14.33 | 37000 | 6.9510 |
| 6.9466 | 14.52 | 37500 | 6.9625 |
| 6.9518 | 14.72 | 38000 | 6.9787 |
| 6.9509 | 14.91 | 38500 | 6.9391 |
| 6.9505 | 15.1 | 39000 | 6.9694 |
| 6.9311 | 15.3 | 39500 | 6.9440 |
| 6.9513 | 15.49 | 40000 | 6.9425 |
| 6.9268 | 15.69 | 40500 | 6.9223 |
| 6.9415 | 15.88 | 41000 | 6.9435 |
| 6.9308 | 16.07 | 41500 | 6.9281 |
| 6.9216 | 16.27 | 42000 | 6.9415 |
| 6.9265 | 16.46 | 42500 | 6.9164 |
| 6.9023 | 16.65 | 43000 | 6.9237 |
| 6.9407 | 16.85 | 43500 | 6.9100 |
| 6.9211 | 17.04 | 44000 | 6.9295 |
| 6.9147 | 17.23 | 44500 | 6.9131 |
| 6.9224 | 17.43 | 45000 | 6.9188 |
| 6.9215 | 17.62 | 45500 | 6.9077 |
| 6.915 | 17.82 | 46000 | 6.9371 |
| 6.906 | 18.01 | 46500 | 6.8932 |
| 6.91 | 18.2 | 47000 | 6.9100 |
| 6.8999 | 18.4 | 47500 | 6.9251 |
| 6.9113 | 18.59 | 48000 | 6.9078 |
| 6.9197 | 18.78 | 48500 | 6.9099 |
| 6.8985 | 18.98 | 49000 | 6.9074 |
| 6.9009 | 19.17 | 49500 | 6.8971 |
| 6.8937 | 19.36 | 50000 | 6.8982 |
| 6.9094 | 19.56 | 50500 | 6.9077 |
| 6.9069 | 19.75 | 51000 | 6.9006 |
| 6.8991 | 19.95 | 51500 | 6.8912 |
| 6.8924 | 20.14 | 52000 | 6.8881 |
| 6.899 | 20.33 | 52500 | 6.8899 |
| 6.9028 | 20.53 | 53000 | 6.8938 |
| 6.8997 | 20.72 | 53500 | 6.8822 |
| 6.8943 | 20.91 | 54000 | 6.9005 |
| 6.8804 | 21.11 | 54500 | 6.9048 |
| 6.8848 | 21.3 | 55000 | 6.9062 |
| 6.9072 | 21.49 | 55500 | 6.9104 |
| 6.8783 | 21.69 | 56000 | 6.9069 |
| 6.8879 | 21.88 | 56500 | 6.8938 |
| 6.8922 | 22.08 | 57000 | 6.8797 |
| 6.8892 | 22.27 | 57500 | 6.9168 |
| 6.8863 | 22.46 | 58000 | 6.8820 |
| 6.8822 | 22.66 | 58500 | 6.9130 |
| 6.8752 | 22.85 | 59000 | 6.8973 |
| 6.8823 | 23.04 | 59500 | 6.8933 |
| 6.8813 | 23.24 | 60000 | 6.8919 |
| 6.8787 | 23.43 | 60500 | 6.8855 |
| 6.8886 | 23.63 | 61000 | 6.8956 |
| 6.8744 | 23.82 | 61500 | 6.9092 |
| 6.8799 | 24.01 | 62000 | 6.8944 |
| 6.879 | 24.21 | 62500 | 6.8850 |
| 6.8797 | 24.4 | 63000 | 6.8782 |
| 6.8724 | 24.59 | 63500 | 6.8691 |
| 6.8803 | 24.79 | 64000 | 6.8965 |
| 6.8899 | 24.98 | 64500 | 6.8986 |
| 6.8873 | 25.17 | 65000 | 6.9034 |
| 6.8777 | 25.37 | 65500 | 6.8658 |
| 6.8784 | 25.56 | 66000 | 6.8803 |
| 6.8791 | 25.76 | 66500 | 6.8727 |
| 6.8736 | 25.95 | 67000 | 6.8832 |
| 6.8865 | 26.14 | 67500 | 6.8811 |
| 6.8668 | 26.34 | 68000 | 6.8817 |
| 6.8709 | 26.53 | 68500 | 6.8945 |
| 6.8755 | 26.72 | 69000 | 6.8777 |
| 6.8635 | 26.92 | 69500 | 6.8747 |
| 6.8752 | 27.11 | 70000 | 6.8875 |
| 6.8729 | 27.3 | 70500 | 6.8696 |
| 6.8728 | 27.5 | 71000 | 6.8659 |
| 6.8692 | 27.69 | 71500 | 6.8856 |
| 6.868 | 27.89 | 72000 | 6.8689 |
| 6.8668 | 28.08 | 72500 | 6.8877 |
| 6.8576 | 28.27 | 73000 | 6.8783 |
| 6.8633 | 28.47 | 73500 | 6.8828 |
| 6.8737 | 28.66 | 74000 | 6.8717 |
| 6.8702 | 28.85 | 74500 | 6.8485 |
| 6.8785 | 29.05 | 75000 | 6.8771 |
| 6.8818 | 29.24 | 75500 | 6.8815 |
| 6.8647 | 29.43 | 76000 | 6.8877 |
| 6.8574 | 29.63 | 76500 | 6.8920 |
| 6.8474 | 29.82 | 77000 | 6.8936 |
| 6.8558 | 30.02 | 77500 | 6.8768 |
| 6.8645 | 30.21 | 78000 | 6.8921 |
| 6.8786 | 30.4 | 78500 | 6.8604 |
| 6.8693 | 30.6 | 79000 | 6.8603 |
| 6.855 | 30.79 | 79500 | 6.8559 |
| 6.8429 | 30.98 | 80000 | 6.8746 |
| 6.8688 | 31.18 | 80500 | 6.8774 |
| 6.8735 | 31.37 | 81000 | 6.8643 |
| 6.8541 | 31.56 | 81500 | 6.8767 |
| 6.8695 | 31.76 | 82000 | 6.8804 |
| 6.8607 | 31.95 | 82500 | 6.8674 |
| 6.8538 | 32.15 | 83000 | 6.8572 |
| 6.8472 | 32.34 | 83500 | 6.8683 |
| 6.8763 | 32.53 | 84000 | 6.8758 |
| 6.8405 | 32.73 | 84500 | 6.8764 |
| 6.8658 | 32.92 | 85000 | 6.8614 |
| 6.8834 | 33.11 | 85500 | 6.8641 |
| 6.8554 | 33.31 | 86000 | 6.8787 |
| 6.8738 | 33.5 | 86500 | 6.8747 |
| 6.848 | 33.69 | 87000 | 6.8699 |
| 6.8621 | 33.89 | 87500 | 6.8654 |
| 6.8543 | 34.08 | 88000 | 6.8639 |
| 6.8606 | 34.28 | 88500 | 6.8852 |
| 6.8666 | 34.47 | 89000 | 6.8840 |
| 6.8717 | 34.66 | 89500 | 6.8773 |
| 6.854 | 34.86 | 90000 | 6.8671 |
| 6.8526 | 35.05 | 90500 | 6.8762 |
| 6.8592 | 35.24 | 91000 | 6.8644 |
| 6.8641 | 35.44 | 91500 | 6.8599 |
| 6.8655 | 35.63 | 92000 | 6.8622 |
| 6.8557 | 35.82 | 92500 | 6.8671 |
| 6.8546 | 36.02 | 93000 | 6.8573 |
| 6.853 | 36.21 | 93500 | 6.8542 |
| 6.8597 | 36.41 | 94000 | 6.8518 |
| 6.8576 | 36.6 | 94500 | 6.8700 |
| 6.8549 | 36.79 | 95000 | 6.8628 |
| 6.8576 | 36.99 | 95500 | 6.8695 |
| 6.8505 | 37.18 | 96000 | 6.8870 |
| 6.8564 | 37.37 | 96500 | 6.8898 |
| 6.8627 | 37.57 | 97000 | 6.8619 |
| 6.8502 | 37.76 | 97500 | 6.8696 |
| 6.8548 | 37.96 | 98000 | 6.8663 |
| 6.8512 | 38.15 | 98500 | 6.8683 |
| 6.8484 | 38.34 | 99000 | 6.8605 |
| 6.8581 | 38.54 | 99500 | 6.8749 |
| 6.8525 | 38.73 | 100000 | 6.8849 |
| 6.8375 | 38.92 | 100500 | 6.8712 |
| 6.8423 | 39.12 | 101000 | 6.8905 |
| 6.8559 | 39.31 | 101500 | 6.8574 |
| 6.8441 | 39.5 | 102000 | 6.8722 |
| 6.8467 | 39.7 | 102500 | 6.8550 |
| 6.8389 | 39.89 | 103000 | 6.8375 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
shikhartuli/flexibert-mini
|
shikhartuli
| 2022-11-21T18:38:59Z | 45 | 2 |
transformers
|
[
"transformers",
"pytorch",
"flexibert",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:openwebtext",
"arxiv:2205.11656",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2022-11-21T17:59:15Z |
---
license: bsd-3-clause
datasets:
- bookcorpus
- wikipedia
- openwebtext
---
# FlexiBERT-Mini model
Pretrained model on the English language using a masked language modeling (MLM) objective. It was found by executing a neural architecture search (NAS) over a design space of ~3.32 billion *flexible* and *heterogeneous* transformer architectures in [this paper](https://arxiv.org/abs/2205.11656). The model is case sensitive.
# Model description
The model consists of diverse attention heads including the traditional self-attention and the discrete cosine transform (DCT). The design space also supports weighted multiplicative attention (WMA), discrete Fourier transform (DFT), and convolution operations in the same transformer model along with different hidden dimensions for each encoder layer.
# How to use
This model should be finetuned on a downstream task. Other models within the FlexiBERT design space can be generated using a model dictionary. See this [github repo](https://github.com/JHA-Lab/txf_design-space) for more details. To instantiate a fresh FlexiBERT-Mini model (for pre-training using the MLM objective):
```python
from transformers import FlexiBERTConfig, FlexiBERTModel, FlexiBERTForMaskedLM
config = FlexiBERTConfig()
model_dict = {'l': 4, 'o': ['sa', 'sa', 'l', 'l'], 'h': [256, 256, 128, 128], 'n': [2, 2, 4, 4],
              'f': [[512, 512, 512], [512, 512, 512], [1024], [1024]], 'p': ['sdp', 'sdp', 'dct', 'dct']}
config.from_model_dict(model_dict)
model = FlexiBERTForMaskedLM(config)
```
# Developer
[Shikhar Tuli](https://github.com/shikhartuli). For any questions, comments or suggestions, please reach me at [stuli@princeton.edu](mailto:stuli@princeton.edu).
# Cite this work
Cite our work using the following bibtex entry:
```
@article{tuli2022jair,
title={{FlexiBERT}: Are Current Transformer Architectures too Homogeneous and Rigid?},
author={Tuli, Shikhar and Dedhia, Bhishma and Tuli, Shreshth and Jha, Niraj K.},
year={2022},
eprint={2205.11656},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
# License
BSD-3-Clause.
Copyright (c) 2022, Shikhar Tuli and Jha Lab.
All rights reserved.
See License file for more details.
|
xfreakazoidx/NightmareMrBeanBabysToilet
|
xfreakazoidx
| 2022-11-21T17:45:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-21T16:38:13Z |
Nightmare Mr. Bean Baby's Toilet. Prompt being "NghtmrBbyFrk". This one was requested after my pictures, so I delivered. The results will really just give you Mr. Bean Baby and his toilet. The model is semi-limited: let's say you put "cat", the baby will likely have some cat features like cat ears. Or you can say things like "blue baby" to make him blue. It's not the greatest model. Steps can be anything; in this case you just get more detail with more steps, as per normal. CFG needs to be between 3-7. Any sampler.

|
GDJ1978/akudamadrive
|
GDJ1978
| 2022-11-21T17:02:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-21T13:31:09Z |
The trigger word in Automatic1111 is the embedding name, e.g. akudamadrivestyle-500.
It's my first attempt; it's good looking but nothing spectacular.
0.05 training rate, 500 steps, 5 images of average quality.
|
statworx/bert-base-german-cased-finetuned-swiss
|
statworx
| 2022-11-21T16:37:35Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"swiss",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-17T08:39:06Z |
---
language: de
license: apache-2.0
tags:
- bert
- swiss
---
# Model Card for Bert-base-german-cased-finetuned-swiss
Bert-base-german-cased model fine-tuned on a corpus of German-Swiss.
# Model Details
## Model Description
Bert-base-german-cased model fine-tuned on a corpus of German-Swiss.
Fine-tuning was done on the Swiss German data of the [Leipzig Corpora Collection](https://wortschatz.uni-leipzig.de/en/download/) and [SwissCrawl](https://icosys.ch/swisscrawl). For testing purposes, the model was evaluated on the [Swiss Dialect Classification dataset](https://huggingface.co/datasets/statworx/swiss-dialects) as down-stream task.
It outperformed its parent model ([bert-base-german-cased](https://huggingface.co/bert-base-german-cased)) by approx. 5% accuracy.
- **Developed by:** Fabian Müller
- **Model type:** Language model
- **Language(s) (NLP):** de
- **License:** apache-2.0
- **Parent Model:** [bert-base-german-cased](https://huggingface.co/bert-base-german-cased)
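A minimal fill-mask sketch (the German sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="statworx/bert-base-german-cased-finetuned-swiss")
print(fill("Ich gehe morgen in die [MASK]."))
```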
|
RajaSi/sd-prompt-generator-gpt-neo-gn
|
RajaSi
| 2022-11-21T16:35:06Z | 100 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-21T11:07:13Z |
Hey friends, welcome! In this applied NLP tutorial we're going to learn how to fine-tune a text generation model and how to push the fine-tuned model to the Hugging Face Model Hub; in the process we also explore the Stable Diffusion side of things, so this is a combination of a lot of different pieces.
The model is uploaded to the Hugging Face Model Hub. I'm calling it SD Prompt Generator GPT-Neo because it is a prompt generator for Stable Diffusion: if you want to create something with Stable Diffusion, you ideally need to give it a very detailed prompt.
As the name suggests, we use GPT-Neo (the 125M-parameter model) and fine-tune it on a set of existing Stable Diffusion prompts. The result is a text generation model: give it a short prompt text and it generates a new, extended, better prompt. Finally, we save the model and push it to the Hugging Face Model Hub.
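A minimal usage sketch with the standard `transformers` text-generation pipeline (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="RajaSi/sd-prompt-generator-gpt-neo-gn")
print(generator("portrait of a cyberpunk samurai", max_length=60)[0]["generated_text"])
```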
|
Wheatley961/Raw_3_no_1_Test_3_new.model
|
Wheatley961
| 2022-11-21T16:23:53Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-21T16:23:27Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 24 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
    "epochs": 2,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 6.474612215184842e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 24,
    "warmup_steps": 3,
    "weight_decay": 0.01
}
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
EP9/bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
|
EP9
| 2022-11-21T16:12:27Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-20T23:09:59Z |
---
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert2bert_shared-spanish-finetuned-summarization-finetuned-xsum
This model is a fine-tuned version of [mrm8488/bert2bert_shared-spanish-finetuned-summarization](https://huggingface.co/mrm8488/bert2bert_shared-spanish-finetuned-summarization) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3690
- Rouge1: 50.02
- Rouge2: 35.706
- Rougel: 46.6253
- Rougelsum: 46.6412
- Gen Len: 22.1176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.5969 | 1.0 | 3090 | 2.4559 | 49.4282 | 35.2705 | 46.095 | 46.0994 | 22.5422 |
| 2.3318 | 2.0 | 6180 | 2.3690 | 50.02 | 35.706 | 46.6253 | 46.6412 | 22.1176 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
alexziweiwang/base-on-torgo0003
|
alexziweiwang
| 2022-11-21T16:07:25Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-21T11:45:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: base-on-torgo0003
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-on-torgo0003
This model is a fine-tuned version of [yongjian/wav2vec2-large-a](https://huggingface.co/yongjian/wav2vec2-large-a) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6579
- Wer: 0.7547
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 28.1611 | 0.46 | 500 | 3.4550 | 1.0163 |
| 3.2238 | 0.92 | 1000 | 2.8781 | 1.0411 |
| 2.8617 | 1.39 | 1500 | 2.9896 | 1.0028 |
| 2.5841 | 1.85 | 2000 | 2.3744 | 1.2896 |
| 2.2029 | 2.31 | 2500 | 1.8598 | 1.2722 |
| 1.9976 | 2.77 | 3000 | 1.6505 | 1.2513 |
| 1.7817 | 3.23 | 3500 | 1.5291 | 1.2294 |
| 1.6484 | 3.69 | 4000 | 1.4635 | 1.2106 |
| 1.56 | 4.16 | 4500 | 1.4251 | 1.1989 |
| 1.417 | 4.62 | 5000 | 1.4040 | 1.1904 |
| 1.2884 | 5.08 | 5500 | 1.2734 | 1.1568 |
| 1.2788 | 5.54 | 6000 | 1.2242 | 1.1384 |
| 1.2159 | 6.0 | 6500 | 1.0561 | 1.1349 |
| 1.1125 | 6.46 | 7000 | 1.1001 | 1.1175 |
| 1.1495 | 6.93 | 7500 | 1.0409 | 1.1112 |
| 1.0222 | 7.39 | 8000 | 1.0525 | 1.0952 |
| 1.0104 | 7.85 | 8500 | 1.0184 | 1.1048 |
| 0.9956 | 8.31 | 9000 | 1.0438 | 1.1196 |
| 0.8747 | 8.77 | 9500 | 1.0736 | 1.1005 |
| 0.8437 | 9.23 | 10000 | 1.0041 | 1.0768 |
| 0.861 | 9.7 | 10500 | 0.9407 | 1.0496 |
| 0.8238 | 10.16 | 11000 | 0.9237 | 1.0697 |
| 0.7806 | 10.62 | 11500 | 0.8706 | 1.0343 |
| 0.7475 | 11.08 | 12000 | 0.9576 | 1.0407 |
| 0.6963 | 11.54 | 12500 | 0.9195 | 1.0159 |
| 0.7624 | 12.0 | 13000 | 0.8102 | 1.0060 |
| 0.6311 | 12.47 | 13500 | 0.8208 | 0.9897 |
| 0.6649 | 12.93 | 14000 | 0.7699 | 0.9968 |
| 0.6025 | 13.39 | 14500 | 0.7834 | 0.9547 |
| 0.5691 | 13.85 | 15000 | 0.7414 | 0.9632 |
| 0.532 | 14.31 | 15500 | 0.7056 | 0.9473 |
| 0.5572 | 14.77 | 16000 | 0.8136 | 0.9929 |
| 0.5455 | 15.24 | 16500 | 0.7355 | 0.9264 |
| 0.5369 | 15.7 | 17000 | 0.7531 | 0.9352 |
| 0.4771 | 16.16 | 17500 | 0.7527 | 0.9228 |
| 0.4778 | 16.62 | 18000 | 0.7312 | 0.9218 |
| 0.4384 | 17.08 | 18500 | 0.6774 | 0.8913 |
| 0.4619 | 17.54 | 19000 | 0.6888 | 0.8896 |
| 0.4341 | 18.01 | 19500 | 0.7068 | 0.9030 |
| 0.4164 | 18.47 | 20000 | 0.6484 | 0.8754 |
| 0.3883 | 18.93 | 20500 | 0.6388 | 0.8676 |
| 0.4135 | 19.39 | 21000 | 0.6732 | 0.8683 |
| 0.4121 | 19.85 | 21500 | 0.6354 | 0.8591 |
| 0.3694 | 20.31 | 22000 | 0.6751 | 0.8581 |
| 0.367 | 20.78 | 22500 | 0.6487 | 0.8411 |
| 0.3798 | 21.24 | 23000 | 0.5955 | 0.8312 |
| 0.3249 | 21.7 | 23500 | 0.6209 | 0.8230 |
| 0.3182 | 22.16 | 24000 | 0.7341 | 0.8212 |
| 0.3196 | 22.62 | 24500 | 0.6533 | 0.8106 |
| 0.297 | 23.08 | 25000 | 0.7163 | 0.8177 |
| 0.3021 | 23.55 | 25500 | 0.7209 | 0.8149 |
| 0.3248 | 24.01 | 26000 | 0.6268 | 0.8018 |
| 0.3013 | 24.47 | 26500 | 0.7014 | 0.7915 |
| 0.2986 | 24.93 | 27000 | 0.7306 | 0.8028 |
| 0.2913 | 25.39 | 27500 | 0.6866 | 0.7912 |
| 0.2706 | 25.85 | 28000 | 0.6860 | 0.7851 |
| 0.2572 | 26.32 | 28500 | 0.6478 | 0.7752 |
| 0.2794 | 26.78 | 29000 | 0.6308 | 0.7703 |
| 0.2796 | 27.24 | 29500 | 0.6302 | 0.7653 |
| 0.2604 | 27.7 | 30000 | 0.6638 | 0.7621 |
| 0.2367 | 28.16 | 30500 | 0.6492 | 0.7593 |
| 0.2383 | 28.62 | 31000 | 0.6560 | 0.7614 |
| 0.2495 | 29.09 | 31500 | 0.6577 | 0.7593 |
| 0.2513 | 29.55 | 32000 | 0.6579 | 0.7547 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
christofid/dapscibert
|
christofid
| 2022-11-21T16:05:11Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-21T16:02:30Z |
---
license: mit
---
### dapSciBERT
DapSciBERT is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. Allenai/scibert_scivocab_uncased is used as the base for training. The training dataset consists of a corpus of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices as well as the World Intellectual Property Organization.
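A minimal fill-mask sketch (the patent-style sentence is illustrative):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="christofid/dapscibert")
print(fill("The present invention relates to a [MASK] for wireless communication."))
```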
|
christofid/dapbert
|
christofid
| 2022-11-21T16:01:15Z | 117 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-21T15:43:44Z |
---
license: mit
---
### dapBERT
DapBERT is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. Bert-base-uncased is used as the base for training. The training dataset consists of a corpus of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices as well as the World Intellectual Property Organization.
|
zugp/ddpm-butterflies-128
|
zugp
| 2022-11-21T14:16:57Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/few-shot-universe",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-07T02:55:42Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/few-shot-universe
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/few-shot-universe` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
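A minimal sampling sketch, assuming the standard `diffusers` `DDPMPipeline` API (the output file name is illustrative):
```python
from diffusers import DDPMPipeline

ddpm = DDPMPipeline.from_pretrained("zugp/ddpm-butterflies-128")
image = ddpm().images[0]  # one 128x128 sample
image.save("sample.png")
```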
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/zugp/ddpm-butterflies-128/tensorboard?#scalars)
|
GDJ1978/psychedelicdoodles
|
GDJ1978
| 2022-11-21T14:15:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-21T13:56:15Z |
Just an experimental embedding trained on a doodle that was put through img2img.
The prompt trigger is `psychedelic` (or `psy`).
Trained for 1000 steps on 3 images with a learning rate of 0.05.
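A minimal sketch of loading the embedding with diffusers (the base model and trigger token below are assumptions; the card does not state which Stable Diffusion version the embedding targets):
```python
from diffusers import StableDiffusionPipeline

# Base model is an assumption; check which SD version the embedding was made for.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("GDJ1978/psychedelicdoodles", token="psychedelic")
image = pipe("a psychedelic doodle of a forest").images[0]
```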
|
Harrier/a2c-AntBulletEnv-v0
|
Harrier
| 2022-11-21T14:05:17Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-21T14:03:57Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1596.00 +/- 357.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption, following the usual `<algo>-<env>.zip` naming of huggingface_sb3 uploads):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename is an assumption; check the repo files for the actual name.
checkpoint = load_from_hub("Harrier/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
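To roll out the loaded policy, the Bullet environments must be registered first (a sketch, assuming `gym` and `pybullet` are installed):
```python
import gym
import pybullet_envs  # noqa: F401, importing this registers AntBulletEnv-v0

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```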
|