Dataset schema:

| Column | Dtype | Stats |
| --- | --- | --- |
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-27 06:27:59 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 521 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-27 06:27:44 |
| card | string | length 11 – 1.01M |
pheinisch/roberta-base-150T-argumentative-sentence-detector
pheinisch
2022-11-23T14:37:42Z
116
1
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "argument mining", "claims", "sentence classification", "en", "dataset:FS150T", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-09T07:04:50Z
--- language: - "en" tags: - "argument mining" - "claims" - "sentence classification" datasets: - "FS150T" metrics: - "accuracy" - "f1" --- # _EXPERIMENTAL_ roberta-base-150T-argumentative-sentence-detector (this model might not be the optimal one for accomplishing the task) - Task: Detects whether a sentence is argumentative (1 - yes/ 0 - not) given the topic and the sentence itself. - language: English - dataset: Few-Shot-150T Corpus v1.1 (FS150T-Corpus) _fine-tuned roberta-base_ ## Performace on test data (threshold: 0.5) ```` {'accuracy': 0.7451388888888889, 'f1': 0.6690712353471596, 'precision': 0.733201581027668, 'recall': 0.615257048092869} ````
PlanTL-GOB-ES/roberta-base-es-wikicat-es
PlanTL-GOB-ES
2022-11-23T14:02:14Z
332
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "español", "text classification", "WikiCAT_esv2", "es", "dataset:projecte-aina/WikiCAT_esv2", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T13:34:08Z
--- language: - es license: apache-2.0 tags: - "español" - "text classification" - "WikiCAT_esv2" datasets: - "projecte-aina/WikiCAT_esv2" metrics: - f1-macro model-index: - name: roberta-base-es-wikicat-es results: - task: type: text-classification dataset: type: projecte-aina/WikiCAT_esv2 name: WikiCAT_esv2 metrics: - name: F1-macro type: f1 value: 0.76632 - name: Accuracy type: accuracy value: 0.79347 widget: - text: "Sedna es el cuerpo menor del sistema solar número 90377; concretamente es un objeto transneptuniano." - text: "El Fútbol Club Barcelona, conocido popularmente como Barça, es una entidad polideportiva con sede en Barcelona, España." --- # Spanish BERTa-v2 (roberta-base-es) fine-tuned for Text Classification ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-uses-and-limitations) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Disclaimer](#disclaimer) </details> ## Model description **roberta-base-es-wikicat-es** is a text classification model for the Spanish language, fine-tuned from the [roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (see the roberta-base-bne model card for more details). ## Intended uses and limitations The **roberta-base-es-wikicat-es** model can be used to classify texts. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use Here is how to use this model: ```python from transformers import pipeline from pprint import pprint nlp = pipeline("text-classification", model="PlanTL-GOB-ES/roberta-base-es-wikicat-es") example = "Sedna es el cuerpo menor del sistema solar número 90377; concretamente es un objeto transneptuniano." tc_results = nlp(example) pprint(tc_results) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data We used the Spanish text classification dataset [WikiCAT_esv2](https://huggingface.co/datasets/PlanTL-GOB-ES/WikiCAT_esv2) for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and three learning rates (1e-5, 3e-5, 5e-5) for 5 epochs. We then selected the best learning rate (2e-5) and checkpoint (epoch 3) using the downstream task metric on the corresponding development set. ## Evaluation ### Variable and metrics This model was fine-tuned maximizing the macro F1 score.
### Evaluation results We evaluated the _roberta-base-es-wikicat-es_ model on the WikiCAT_esv2 dev set: | Model | WikiCAT_esv2 (F1) | | ------------|:-------------| | roberta-base-es-wikicat-es | 0.76632 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es) ### Contact information For further information, send an email to aina@bsc.es ### Copyright Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ## Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. </details>
mshuggingface/swin-tiny-patch4-window7-224-ms-test1
mshuggingface
2022-11-23T13:54:56Z
205
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-23T13:51:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-ms-test1 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.5 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-ms-test1 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.6036 - Accuracy: 0.5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 1 | 0.7667 | 0.5 | | No log | 2.0 | 2 | 0.6644 | 0.5 | | No log | 3.0 | 3 | 0.6036 | 0.5 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
archipela/ell-conventions
archipela
2022-11-23T13:34:18Z
101
0
transformers
[ "transformers", "pytorch", "autotrain", "text-regression", "unk", "dataset:huynhdoo/autotrain-data-ell-conventions", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-11-23T13:32:43Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - huynhdoo/autotrain-data-ell-conventions co2_eq_emissions: emissions: 2.6341173422087247 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 2218371153 - CO2 Emissions (in grams): 2.6341 ## Validation Metrics - Loss: 0.259 - MSE: 0.259 - MAE: 0.402 - R2: 0.426 - RMSE: 0.509 - Explained Variance: 0.439 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-conventions-2218371153 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-conventions-2218371153", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-conventions-2218371153", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
archipela/ell-vocabulary
archipela
2022-11-23T13:33:26Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "text-regression", "unk", "dataset:huynhdoo/autotrain-data-ell-vocabulary", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-11-23T13:31:43Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - huynhdoo/autotrain-data-ell-vocabulary co2_eq_emissions: emissions: 2.3719978527185237 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 2218271145 - CO2 Emissions (in grams): 2.3720 ## Validation Metrics - Loss: 0.228 - MSE: 0.228 - MAE: 0.383 - R2: 0.343 - RMSE: 0.478 - Explained Variance: 0.402 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-vocabulary-2218271145 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-vocabulary-2218271145", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-vocabulary-2218271145", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
archipela/ell-grammar
archipela
2022-11-23T13:31:50Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "text-regression", "unk", "dataset:huynhdoo/autotrain-data-ell-grammar", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-11-23T13:29:53Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - huynhdoo/autotrain-data-ell-grammar co2_eq_emissions: emissions: 2.4374734387953882 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 2218171131 - CO2 Emissions (in grams): 2.4375 ## Validation Metrics - Loss: 0.325 - MSE: 0.325 - MAE: 0.449 - R2: 0.342 - RMSE: 0.570 - Explained Variance: 0.425 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-grammar-2218171131 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-grammar-2218171131", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-grammar-2218171131", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
jamiehudson/579-STmodel-v4
jamiehudson
2022-11-23T13:31:46Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T12:18:54Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1800 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1800, "warmup_steps": 180, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
archipela/ell-cohesion
archipela
2022-11-23T13:30:47Z
100
0
transformers
[ "transformers", "pytorch", "autotrain", "text-regression", "unk", "dataset:huynhdoo/autotrain-data-ell-cohesion", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
null
2022-11-23T13:27:59Z
--- tags: - autotrain - text-regression language: - unk widget: - text: "I love AutoTrain 🤗" datasets: - huynhdoo/autotrain-data-ell-cohesion co2_eq_emissions: emissions: 4.569992504332477 --- # Model Trained Using AutoTrain - Problem type: Single Column Regression - Model ID: 2217971118 - CO2 Emissions (in grams): 4.5700 ## Validation Metrics - Loss: 0.259 - MSE: 0.259 - MAE: 0.407 - R2: 0.416 - RMSE: 0.509 - Explained Variance: 0.427 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/huynhdoo/autotrain-ell-cohesion-2217971118 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("huynhdoo/autotrain-ell-cohesion-2217971118", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("huynhdoo/autotrain-ell-cohesion-2217971118", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
aherzberg/wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES
aherzberg
2022-11-23T13:27:27Z
158
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-11-23T12:20:30Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3710 - Accuracy: 0.8822 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7822 | 0.96 | 18 | 0.6874 | 0.7424 | | 0.5685 | 1.96 | 36 | 0.5974 | 0.7845 | | 0.45 | 2.96 | 54 | 0.4988 | 0.8182 | | 0.399 | 3.96 | 72 | 0.4583 | 0.8384 | | 0.3457 | 4.96 | 90 | 0.4415 | 0.8451 | | 0.352 | 5.96 | 108 | 0.3710 | 0.8822 | | 0.2878 | 6.96 | 126 | 0.3881 | 0.8620 | | 0.2669 | 7.96 | 144 | 0.4309 | 0.8502 | | 0.2406 | 8.96 | 162 | 0.4271 | 0.8502 | | 0.2491 | 9.96 | 180 | 0.4271 | 0.8502 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.14.0 - Tokenizers 0.10.3
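The card does not include a usage example; a minimal sketch (not from the original card) with the `transformers` audio-classification pipeline, where the file path is a placeholder:

```python
from transformers import pipeline

# Minimal sketch: classify the sentiment (positive/negative) of a speech clip.
classifier = pipeline(
    "audio-classification",
    model="aherzberg/wav2vec2-base-POSITIVE_NEGATIVE_ONLY_BALANCED_CLASSES",
)

# "speech.wav" is a placeholder path; wav2vec2-base expects 16 kHz mono audio,
# which the pipeline resamples to when ffmpeg is available.
print(classifier("speech.wav"))
```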
heziiiii/ddpm-butterflies-128
heziiiii
2022-11-23T13:26:40Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-23T12:08:11Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline # (a minimal sketch is provided after this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_power: None - ema_max_decay: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/heziiiii/ddpm-butterflies-128/tensorboard?#scalars)
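The card above leaves its how-to-use snippet as a TODO; a minimal sketch with the 🤗 Diffusers `DDPMPipeline`, using default sampling settings (not specified by the card):

```python
from diffusers import DDPMPipeline

# Minimal sketch: sample one 128x128 butterfly image from the trained pipeline.
pipeline = DDPMPipeline.from_pretrained("heziiiii/ddpm-butterflies-128")
image = pipeline().images[0]  # full DDPM sampling; slow without a GPU
image.save("butterfly.png")
```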
sd-concepts-library/yellow-cockatiel-parrot
sd-concepts-library
2022-11-23T12:50:05Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-11-23T12:49:55Z
--- license: mit --- ### Yellow Cockatiel Parrot on Stable Diffusion This is the `<rosa-popugai>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<rosa-popugai> 0](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/3.jpeg) ![<rosa-popugai> 1](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/0.jpeg) ![<rosa-popugai> 2](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/2.jpeg) ![<rosa-popugai> 3](https://huggingface.co/sd-concepts-library/yellow-cockatiel-parrot/resolve/main/concept_images/1.jpeg)
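A minimal sketch for using the concept with 🤗 Diffusers instead of the notebooks; the base checkpoint (`runwayml/stable-diffusion-v1-5`) is an assumption, not stated by the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load the <rosa-popugai> textual-inversion concept into SD.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/yellow-cockatiel-parrot")

image = pipe("a photo of <rosa-popugai> perched on a branch").images[0]
image.save("rosa-popugai.png")
```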
jamiehudson/579-STmodel-v2
jamiehudson
2022-11-23T12:41:08Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T12:40:56Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 300 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 300, "warmup_steps": 30, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
rach405/test_trainer3
rach405
2022-11-23T12:34:47Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-21T12:22:38Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: test_trainer3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test_trainer3 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 1.8785 | 0.396 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cpu - Tokenizers 0.11.6
pucpr/gpt2-bio-pt
pucpr
2022-11-23T12:33:37Z
389
7
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "pt", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: "pt" widget: - text: "O paciente recebeu " - text: "A cardiologia provou que " - text: "O paciente chegou no hospital " - text: "Cientistas descobriram que " - text: "O nível de atividade biológica " - text: "O DNA e o RNA " thumbnail: "https://raw.githubusercontent.com/HAILab-PUCPR/gpt2-bio-pt/main/img/logo-gpt2-bio-pt.png" --- <img src="https://raw.githubusercontent.com/HAILab-PUCPR/gpt2-bio-pt/main/img/logo-gpt2-bio-pt.png" alt="Logo GPt2-Bio-Pt"> # GPT2-BioPT - a Language Model for Portuguese Biomedical text generation ## Introduction GPT2-BioPT (Portuguese Biomedical GPT-2 small) is a language model for Portuguese based on the OpenAI GPT-2 model, trained from the [GPorTuguese-2](https://huggingface.co/pierreguillou/gpt2-small-portuguese/) with biomedical literature. We used **Transfer Learning and Fine-tuning techniques** with 110MB of training data, corresponding to 16,209,373 tokens and 729,654 sentences. ## GPT-2 *Note: information copied/pasted from [Model: gpt2 >> GPT-2](https://huggingface.co/gpt2#gpt-2)* Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in this [paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at this [page](https://openai.com/blog/better-language-models/) (February 14, 2019). Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias. ## Model description *Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)* GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences. More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt. ## How to use GPT2-BioPT with HuggingFace ``` from transformers import pipeline chef = pipeline('text-generation',model="pucpr/gpt2-bio-pt", tokenizer="pucpr/gpt2-bio-pt",config={'max_length':800}) result = chef('O paciente chegou no hospital')[0]['generated_text'] print(result) ``` Resultado: *```O paciente chegou no hospital três meses após a operação, não houve complicações graves. Entre os grupos que apresentaram maior número de lesões, o exame da cavidade pélvica estava significantemente associado à ausência de complicações. 
Foi encontrada uma maior incidência de fraturas (...)```* ## Citation ``` @INPROCEEDINGS{9474713, author={Schneider, Elisa Terumi Rubel and de Souza, João Vitor Andrioli and Gumiel, Yohan Bonescki and Moro, Claudia and Paraiso, Emerson Cabrera}, booktitle={2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS)}, title={A GPT-2 Language Model for Biomedical Texts in Portuguese}, year={2021}, volume={}, number={}, pages={474-479}, doi={10.1109/CBMS52027.2021.00056}} ``` ## Questions? Post a Github issue on the [GPT2-Bio-Pt repo](https://github.com/HAILab-PUCPR/gpt2-bio-pt/).
akmmsr/bert-finetuned-ner
akmmsr
2022-11-23T12:31:34Z
69
0
transformers
[ "transformers", "tf", "bert", "token-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-18T12:54:34Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: akmmsr/bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # akmmsr/bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0266 - Validation Loss: 0.0519 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1758 | 0.0625 | 0 | | 0.0457 | 0.0537 | 1 | | 0.0266 | 0.0519 | 2 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.7.1 - Tokenizers 0.13.2
jamiehudson/579-STmodel-v1
jamiehudson
2022-11-23T12:30:25Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T12:30:13Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 300 with parameters: ``` {'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 300, "warmup_steps": 30, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
bsenst/skin-cancer-HAM10k
bsenst
2022-11-23T12:14:13Z
0
1
null
[ "license:openrail", "region:us" ]
null
2022-11-22T13:33:36Z
--- license: openrail --- ***Disclaimer: This work is part of an educational project. It is not intended for clinical application, and it cannot make real-world predictions for skin lesions. For recommendations regarding skin lesions, seek expert advice, such as that provided by a dermatologist.*** The model (xception_v4_1_07_0.699.h5) was trained as described in this Kaggle notebook: https://www.kaggle.com/bnzn261029/capstone1-ham10k-skincancer The code repository on GitHub: https://github.com/bsenst/capstone1-skin-lesion-classifier The dataset on Kaggle: https://www.kaggle.com/datasets/kmader/skin-cancer-mnist-ham10000 The Gradio app on Hugging Face Spaces: https://huggingface.co/spaces/bsenst/keras-image-classifier |Layer (type)|Output Shape|Param #| |-|-|-| |input_2 (InputLayer)|[(None, 150, 150, 3)]|0| |xception (Functional)|(None, 5, 5, 2048)|20861480| |global_average_pooling2d (GlobalAveragePooling2D) |(None, 2048)|0| |dense (Dense)|(None, 7)|14343| Total params: 20,875,823 Trainable params: 14,343 Non-trainable params: 20,861,480
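A minimal inference sketch (not from the original card), assuming the `.h5` checkpoint named above sits at the repository root; the input scaling is also an assumption, so check the training notebook:

```python
import numpy as np
from huggingface_hub import hf_hub_download
from tensorflow import keras

# Minimal sketch: fetch the checkpoint named in the card and run one prediction.
weights_path = hf_hub_download("bsenst/skin-cancer-HAM10k", "xception_v4_1_07_0.699.h5")
model = keras.models.load_model(weights_path)

# Input size follows the card's table (150x150x3); "lesion.jpg" is a placeholder.
img = keras.utils.load_img("lesion.jpg", target_size=(150, 150))
x = np.expand_dims(keras.utils.img_to_array(img) / 255.0, axis=0)
print(model.predict(x))  # scores for the 7 HAM10000 lesion classes
```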
cafeai/cafe_aesthetic
cafeai
2022-11-23T12:08:27Z
3,264
50
transformers
[ "transformers", "pytorch", "beit", "image-classification", "license:agpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-11-14T09:56:39Z
--- license: agpl-3.0 --- # Info Since people are downloading this and I don't know why, I'll add some information. This model is an image classifier fine-tuned on `microsoft/beit-base-patch16-384`. Its purpose is to be used in the dataset conditioning step for the [Waifu Diffusion project](https://huggingface.co/hakurei/waifu-diffusion), a fine-tune effort for Stable Diffusion. As WD1.4 is planned to have a *significantly large dataset* (~15m images), it is infeasible to analyze every image manually to determine whether or not it should be included in the final training dataset. This image classifier is trained on approximately 3.5k real-life and anime/manga images. Its purpose is to remove aesthetically worthless images from our dataset by classifying them as "`not_aesthetic`". The image classifier was trained to **err on the side of caution** and will generally tend to include images unless they are in a "manga-like" format, have messy lines and/or are sketches, or include an unacceptable amount of text (namely text that covers the primary subject of the image). The idea is that certain images will hurt a SD fine-tune. Note: This classifier is not perfect, just like every other classifier out there. However, with a sufficiently large dataset, any imperfections or misclassifications should average themselves out due to the Law of Large Numbers. You can test out the classifier [here](https://huggingface.co/spaces/cafeai/cafe_aesthetic_demo), along with some other classifiers for the project. # License Released under the aGPLv3. Use the model as you wish for any purpose. If you make changes, share the changes.
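A minimal filtering sketch (not part of the original card); the 0.5 threshold is an assumption to tune per dataset:

```python
from transformers import pipeline

# Minimal sketch: score an image for the aesthetic/not_aesthetic split.
classifier = pipeline("image-classification", model="cafeai/cafe_aesthetic")
scores = classifier("sample.png")  # placeholder path

# Drop images the classifier confidently marks as aesthetically worthless.
keep = not any(s["label"] == "not_aesthetic" and s["score"] > 0.5 for s in scores)
print(scores, "keep:", keep)
```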
christofid/dabert-multi
christofid
2022-11-23T12:05:14Z
121
0
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-23T11:43:17Z
--- license: mit --- ### dapBERT-multi dapBERT-multi is a BERT-like model trained with the domain-adaptive pretraining method ([Gururangan et al.](https://aclanthology.org/2020.acl-main.740/)) for the patent domain. bert-base-multilingual-cased is used as the base model for training. The training dataset consists of a corpus of 10,000,000 patent abstracts filed between 1998 and 2020 with the US and European patent offices as well as the World Intellectual Property Organization.
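A minimal usage sketch (not part of the original card), since the checkpoint is a fill-mask model; the example sentence is illustrative:

```python
from transformers import pipeline

# Minimal sketch: masked-token prediction on patent-style text.
fill = pipeline("fill-mask", model="christofid/dabert-multi")

for pred in fill("The present invention relates to a [MASK] device."):
    print(pred["token_str"], round(pred["score"], 3))
```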
gwz0202/ddpm-butterflied-128
gwz0202
2022-11-23T12:03:46Z
0
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/few-shot-pokemon", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-23T10:51:41Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/few-shot-pokemon metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflied-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/few-shot-pokemon` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_power: None - ema_max_decay: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/gwz0202/ddpm-butterflied-128/tensorboard?#scalars)
dscoursetechnion/t5-small-finetuned-xsum
dscoursetechnion
2022-11-23T12:03:09Z
113
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:xsum", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-23T08:03:31Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - xsum metrics: - rouge model-index: - name: t5-small-finetuned-xsum results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: xsum type: xsum config: default split: train args: default metrics: - name: Rouge1 type: rouge value: 26.7823 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-finetuned-xsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.5658 - Rouge1: 26.7823 - Rouge2: 6.7168 - Rougel: 20.9066 - Rougelsum: 20.9054 - Gen Len: 18.8193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.8016 | 1.0 | 4251 | 2.5658 | 26.7823 | 6.7168 | 20.9066 | 20.9054 | 18.8193 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
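A minimal usage sketch (not part of the original card); the article text and generation settings are illustrative:

```python
from transformers import pipeline

# Minimal sketch: XSum-style one-sentence summarization.
summarizer = pipeline("summarization", model="dscoursetechnion/t5-small-finetuned-xsum")

article = (
    "The local council has approved plans for a new cycle path along the river, "
    "with construction expected to begin next spring and finish within a year."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```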
Intel/bert-mini-sst2-distilled-sparse-90-1X4-block
Intel
2022-11-23T11:48:53Z
115
1
transformers
[ "transformers", "pytorch", "onnx", "bert", "text-classification", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-16T01:44:17Z
--- license: mit --- # Sparse BERT mini model (uncased) A fine-tuned model pruned to 1:4 structured sparsity. The model is a pruned version of the [BERT mini model](https://huggingface.co/prajjwal1/bert-mini). ## Intended Use The model can be used for inference with sparsity optimization. Further details on the model and its usage will be available soon. ## Evaluation Results We get the following results on the SST-2 development set: | Task | Accuracy | |------|----------| | SST-2 | 87.2 | This is better than the dense [bert mini](https://huggingface.co/M-FAC/bert-mini-finetuned-sst2), which achieves 84.74%.
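A minimal usage sketch (not part of the original card). Note that plain `transformers` inference runs the pruned weights densely; the sparsity speedup requires a runtime that exploits the block pattern (e.g. via the exported ONNX weights in the repo):

```python
from transformers import pipeline

# Minimal sketch: SST-2 style sentiment classification.
classifier = pipeline(
    "text-classification",
    model="Intel/bert-mini-sst2-distilled-sparse-90-1X4-block",
)
print(classifier("A charming and often affecting journey."))
```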
josiahkhor/en_triage_subject
josiahkhor
2022-11-23T11:43:56Z
5
0
spacy
[ "spacy", "text-classification", "en", "region:us" ]
text-classification
2022-11-23T11:30:59Z
--- tags: - spacy - text-classification language: - en model-index: - name: en_triage_subject results: [] --- | Feature | Description | | --- | --- | | **Name** | `en_triage_subject` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.3,<3.5.0` | | **Default Pipeline** | `tok2vec`, `textcat` | | **Components** | `tok2vec`, `textcat` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (5 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`textcat`** | `General Correspondence`, `Invoice`, `New Claim Form`, `Assessor Report`, `Complaint` | </details> ### Accuracy | Type | Score | | --- | --- | | `CATS_SCORE` | 79.52 | | `CATS_MICRO_P` | 99.34 | | `CATS_MICRO_R` | 99.34 | | `CATS_MICRO_F` | 99.34 | | `CATS_MACRO_P` | 79.37 | | `CATS_MACRO_R` | 79.67 | | `CATS_MACRO_F` | 79.52 | | `CATS_MACRO_AUC` | 79.99 | | `CATS_MACRO_AUC_PER_TYPE` | 0.00 | | `TOK2VEC_LOSS` | 25952.93 | | `TEXTCAT_LOSS` | 58.98 |
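A minimal usage sketch (not part of the original card); the wheel filename follows the standard `spacy-huggingface-hub` naming and is an assumption, so check the repository's file list:

```python
# First install the packaged pipeline from the Hub, e.g.:
#   pip install "en_triage_subject @ https://huggingface.co/josiahkhor/en_triage_subject/resolve/main/en_triage_subject-any-py3-none-any.whl"
import spacy

nlp = spacy.load("en_triage_subject")
doc = nlp("Please find attached the invoice for the October repairs.")
print(doc.cats)  # scores over the five labels, e.g. 'Invoice', 'Complaint'
```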
sabrinaverga/complete-prova
sabrinaverga
2022-11-23T11:25:56Z
83
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-23T11:08:05Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: complete-prova results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # complete-prova This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4074 - Precision: 0.5533 - Recall: 0.3424 - F1: 0.4230 - Accuracy: 0.9092 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 500 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 0.76 | 100 | 0.6181 | 0.0 | 0.0 | 0.0 | 0.8733 | | No log | 1.52 | 200 | 0.5377 | 0.4167 | 0.0485 | 0.0869 | 0.8792 | | No log | 2.27 | 300 | 0.4737 | 0.4286 | 0.1222 | 0.1902 | 0.8870 | | No log | 3.03 | 400 | 0.4254 | 0.5152 | 0.3278 | 0.4007 | 0.9063 | | 0.5393 | 3.79 | 500 | 0.4074 | 0.5533 | 0.3424 | 0.4230 | 0.9092 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.13.0+cu116 - Datasets 2.6.1 - Tokenizers 0.13.2
jesspi/IFE-sentence-model
jesspi
2022-11-23T10:29:47Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T10:29:34Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3170 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 6.629946430758516e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 3170, "warmup_steps": 317, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
selmey/behaviour-change-valence-german
selmey
2022-11-23T10:02:13Z
103
1
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T09:17:40Z
bert-base-german-cased fine-tuned on the valence level of the GLoHBCD dataset (https://github.com/SelinaMeyer/GLoHBCD). The dataset leverages Motivational Interviewing client behaviour codes to evaluate user utterances across different dimensions and gauge users' stance and thoughts about behaviour change in the context of weight loss. This model classifies German text around behaviour change as either "Change Talk" (utterances in favour of change, 1) or "Sustain Talk" (utterances in favour of the status quo, 0). When using the model, please cite: @InProceedings{meyer-elsweiler:2022:LREC, author = {Meyer, Selina and Elsweiler, David}, title = {GLoHBCD: A Naturalistic German Dataset for Language of Health Behaviour Change on Online Support Forums}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {2226--2235}, url = {https://aclanthology.org/2022.lrec-1.239}}
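A minimal usage sketch (not part of the original card); how the two classes map onto the checkpoint's label names is an assumption, so verify against the model's `config.json`:

```python
from transformers import pipeline

# Minimal sketch: classify a German utterance as change talk (1) vs. sustain talk (0).
classifier = pipeline("text-classification", model="selmey/behaviour-change-valence-german")

# "I really want to eat healthier from now on." -- expected: change talk
print(classifier("Ich möchte ab jetzt wirklich gesünder essen."))
```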
cgt/pert-qa
cgt
2022-11-23T09:46:49Z
47
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:cmrc2018", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-03T06:29:16Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - cmrc2018 model-index: - name: pert-qa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pert-qa This model is a fine-tuned version of [hfl/chinese-pert-large](https://huggingface.co/hfl/chinese-pert-large) on the cmrc2018 dataset. It achieves the following results on the evaluation set: - Loss: 0.6942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1273 | 1.0 | 1200 | 0.7088 | | 0.6132 | 2.0 | 2400 | 0.6942 | ### Framework versions - Transformers 4.23.1 - Pytorch 1.10.0+cu113 - Datasets 2.6.1 - Tokenizers 0.13.1
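A minimal usage sketch (not part of the original card) for CMRC 2018-style extractive QA; the question and context are illustrative:

```python
from transformers import pipeline

# Minimal sketch: extractive question answering on Chinese text.
qa = pipeline("question-answering", model="cgt/pert-qa")

result = qa(
    question="模型在哪个数据集上微调？",  # "On which dataset was the model fine-tuned?"
    context="pert-qa 是 hfl/chinese-pert-large 在 cmrc2018 数据集上微调得到的问答模型。",
)
print(result["answer"], result["score"])
```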
Watwat100/gpu1
Watwat100
2022-11-23T09:19:44Z
2
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-23T09:19:31Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # Watwat100/gpu1 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Watwat100/gpu1') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Watwat100/gpu1) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1744 with parameters: ``` {'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 1744, "warmup_steps": 175, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Roy029/mpyt5_e5
Roy029
2022-11-23T08:59:18Z
106
0
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "license:openrail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-22T10:04:27Z
--- license: openrail --- # Model Card for mpyt5_e5 <!-- Provide a quick summary of what the model is/does. [Optional] --> A model pre-trained not only on natural language but also on Python code. # Training Details ## Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> Python Code (1.05GB) ## Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> - MLM - python vocab (https://huggingface.co/kkuramitsu/mt5-pytoken) ### Preprocessing mT5 + Python ### Speeds, Sizes, Times <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> - mT5-small (300M parameters) - max_length = 128 # Model Version - *epoch5: This Model - *epoch10: https://huggingface.co/Roy029/mpyt5_e10 - *epoch15: https://huggingface.co/Roy029/mpyt5_e15 - *epoch20: https://huggingface.co/Roy029/mpyt5_e20
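The card gives no usage snippet. Since this checkpoint is an MLM-pretrained base intended for further fine-tuning, here is a minimal loading sketch; pairing it with the python-vocab tokenizer linked above is an assumption:

```python
from transformers import AutoTokenizer, MT5ForConditionalGeneration

# Assumption: the python-vocab tokenizer linked in the card matches this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("kkuramitsu/mt5-pytoken")
model = MT5ForConditionalGeneration.from_pretrained("Roy029/mpyt5_e5")

# The model was pre-trained with MLM, so raw generation is mainly a sanity check
# before fine-tuning on a downstream code task.
inputs = tokenizer("def add(a, b):", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```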
birgermoell/whisper-small-sv-bm
birgermoell
2022-11-23T08:54:31Z
14
0
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "sv", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-05T00:29:07Z
--- language: - sv license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: WhisperSmallSwedishBirgerMoell results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: sv-SE split: train+validation args: sv-SE metrics: - name: Wer type: wer value: 19.58538356053884 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # WhisperSmallSwedishBirgerMoell This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3253 - Wer: 19.5854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.1523 | 1.29 | 1000 | 0.2924 | 21.5509 | | 0.0515 | 2.59 | 2000 | 0.2856 | 20.4593 | | 0.0214 | 3.88 | 3000 | 0.3010 | 19.9054 | | 0.0042 | 5.17 | 4000 | 0.3253 | 19.5854 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.13.0+cu117 - Datasets 2.6.1 - Tokenizers 0.13.1
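A minimal transcription sketch (the audio file name is hypothetical; input should be 16 kHz Swedish speech):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="birgermoell/whisper-small-sv-bm")

# Hypothetical local recording of Swedish speech.
print(asr("swedish_sample.wav")["text"])
```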
ai-forever/ReadingPipeline-Peter
ai-forever
2022-11-23T08:41:06Z
0
1
null
[ "onnx", "PyTorch", "OCR", "Segmentation", "HTR", "ru", "dataset:sberbank-ai/Peter", "license:mit", "region:us" ]
null
2022-08-09T09:07:45Z
--- language: - ru tags: - PyTorch - OCR - Segmentation - HTR datasets: - "sberbank-ai/Peter" license: mit --- This is weights storage for models trained with [ReadingPipeline](https://github.com/ai-forever/ReadingPipeline). The weights are for the OCR and segmentation models trained on the [Peter dataset](https://huggingface.co/datasets/sberbank-ai/Peter).
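Since this repository only stores weights, one way to fetch them locally is the standard `huggingface_hub` download API (a sketch; the pipeline configs themselves live in the GitHub repository above):

```python
from huggingface_hub import snapshot_download

# Downloads the OCR and segmentation checkpoints; point the
# ReadingPipeline configs from GitHub at the returned directory.
local_dir = snapshot_download(repo_id="ai-forever/ReadingPipeline-Peter")
print(local_dir)
```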
mayank-soni/mt5-small-finetuned-amazon-en-es
mayank-soni
2022-11-23T08:16:42Z
64
0
transformers
[ "transformers", "tf", "mt5", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-23T07:23:42Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: mayank-soni/mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # mayank-soni/mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.0475 - Validation Loss: 3.3455 - Epoch: 7 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 9.8713 | 4.1729 | 0 | | 5.8463 | 3.7092 | 1 | | 5.1036 | 3.5528 | 2 | | 4.7009 | 3.4817 | 3 | | 4.4143 | 3.4132 | 4 | | 4.2395 | 3.3689 | 5 | | 4.1259 | 3.3469 | 6 | | 4.0475 | 3.3455 | 7 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.7.1 - Tokenizers 0.13.2
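The card lists no usage example; given the model name, a summarization sketch with the TensorFlow weights (the review text is hypothetical):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mayank-soni/mt5-small-finetuned-amazon-en-es")
model = TFAutoModelForSeq2SeqLM.from_pretrained("mayank-soni/mt5-small-finetuned-amazon-en-es")

review = "I loved this book. The plot kept me hooked until the very last page."
inputs = tokenizer(review, return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```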
xaeroq/dqn-Qbert-v5
xaeroq
2022-11-23T07:49:54Z
0
0
stable-baselines3
[ "stable-baselines3", "ALE/Qbert-v5", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-23T07:49:30Z
--- library_name: stable-baselines3 tags: - ALE/Qbert-v5 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: ALE/Qbert-v5 type: ALE/Qbert-v5 metrics: - type: mean_reward value: 6665.00 +/- 1973.49 name: mean_reward verified: false --- # **DQN** Agent playing **ALE/Qbert-v5** This is a trained model of a **DQN** agent playing **ALE/Qbert-v5** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/ python enjoy.py --algo dqn --env ALE/Qbert-v5 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env ALE/Qbert-v5 -orga xaeroq -f logs/ rl_zoo3 enjoy --algo dqn --env ALE/Qbert-v5 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env ALE/Qbert-v5 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env ALE/Qbert-v5 -f logs/ -orga xaeroq ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
popolin52/q-FrozenLake-v1-4x4-noSlippery
popolin52
2022-11-23T05:39:48Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-11-23T05:39:41Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the
# Hugging Face Deep RL course notebooks; they are not an installable package.
model = load_from_hub(repo_id="popolin52/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
eikoenchine/xlm-roberta-base-finetuned-panx-de-fr
eikoenchine
2022-11-23T05:33:59Z
110
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-23T05:20:19Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1651 - F1: 0.8578 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.211 | 1.0 | 715 | 0.1834 | 0.8266 | | 0.1447 | 2.0 | 1430 | 0.1624 | 0.8464 | | 0.0933 | 3.0 | 2145 | 0.1651 | 0.8578 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0 - Datasets 2.7.0 - Tokenizers 0.12.1
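A minimal tagging sketch (assuming the PAN-X NER label scheme the model name implies; the sentence is hypothetical):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="eikoenchine/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel hat Paris im Mai besucht."))
```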
caffeinism/ddpm-butterflies-128
caffeinism
2022-11-23T04:20:02Z
2
0
diffusers
[ "diffusers", "tensorboard", "en", "dataset:huggan/smithsonian_butterflies_subset", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-11-21T09:47:12Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/caffeinism/ddpm-butterflies-128/tensorboard?#scalars)
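The "How to use" section above is still a TODO; a minimal sketch with the standard `diffusers` unconditional pipeline (assuming the repository hosts a regular DDPM checkpoint):

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("caffeinism/ddpm-butterflies-128")

# Sample one 128x128 butterfly image.
image = pipeline().images[0]
image.save("butterfly.png")
```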
espnet/simpleoier_librimix_asr_train_asr_transformer_multispkr_raw_en_char_sp
espnet
2022-11-23T03:43:50Z
0
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:librimix", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-11-10T19:50:37Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - librimix license: cc-by-4.0 --- ## ESPnet2 ASR model ### `espnet/simpleoier_librimix_asr_train_asr_transformer_multispkr_raw_en_char_sp` This model was trained by simpleoier using librimix recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 28695114f2771ac3d2a9cc0b5fb30a2c3262e49a pip install -e . cd egs2/librimix/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librimix_asr_train_asr_transformer_multispkr_raw_en_char_sp ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Thu Nov 10 14:58:09 EST 2022` - python version: `3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]` - espnet version: `espnet 202209` - pytorch version: `pytorch 1.12.1` - Git hash: `b3c185d5d707bb385b74f42df2cc59bcf7d7e754` - Commit date: `Wed Nov 9 22:00:30 2022 -0500` ## asr_train_asr_transformer_multispkr_raw_en_char_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_multi_asrtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/test|6000|111243|80.4|17.4|2.2|3.8|23.5|88.0| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_multi_asrtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/test|6000|590408|90.5|6.1|3.5|3.9|13.5|88.0| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_transformer_multispkr.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_transformer_multispkr_raw_en_char_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 45 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - acc - max keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 5000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_raw_en_char_sp/train/speech_shape - exp/asr_stats_raw_en_char_sp/train/text_shape.char - exp/asr_stats_raw_en_char_sp/train/text_spk2_shape.char valid_shape_file: - exp/asr_stats_raw_en_char_sp/valid/speech_shape - exp/asr_stats_raw_en_char_sp/valid/text_shape.char - exp/asr_stats_raw_en_char_sp/valid/text_spk2_shape.char batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 - 150 sort_in_batch: descending sort_batch: descending 
multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_sp/wav.scp - speech - sound - - dump/raw/train_sp/text_spk1 - text - text - - dump/raw/train_sp/text_spk2 - text_spk2 - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text_spk1 - text - text - - dump/raw/dev/text_spk2 - text_spk2 - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - <space> - E - T - A - O - N - I - H - S - R - D - L - U - M - C - W - F - G - Y - P - B - V - K - '''' - X - J - Q - Z - <sos/eos> init: xavier_uniform input_size: null ctc_conf: reduce: false joint_net_conf: null use_preprocessor: true token_type: char bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' short_noise_thres: 0.5 frontend: default frontend_conf: fs: 16k specaug: null specaug_conf: {} normalize: global_mvn normalize_conf: stats_file: exp/asr_stats_raw_en_char_sp/train/feats_stats.npz model: pit_espnet model_conf: ctc_weight: 0.2 lsm_weight: 0.1 length_normalized_loss: false num_inf: 2 num_ref: 2 preencoder: null preencoder_conf: {} encoder: transformer_multispkr encoder_conf: output_size: 256 attention_heads: 4 linear_units: 2048 num_blocks: 8 num_blocks_sd: 4 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true num_inf: 2 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 4 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 preprocessor: multi preprocessor_conf: text_name: - text - text_spk2 required: - output_dir - token_list version: '202209' distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Migueluao123/roberta-base-bne-finetuned-amazon_reviews_multi
Migueluao123
2022-11-23T03:30:00Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-23T02:45:02Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2215 - Accuracy: 0.9343 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1948 | 1.0 | 1250 | 0.1743 | 0.933 | | 0.0979 | 2.0 | 2500 | 0.2215 | 0.9343 | ### Framework versions - Transformers 4.24.0 - Pytorch 1.12.1+cu113 - Tokenizers 0.13.2
sd-concepts-library/dreams
sd-concepts-library
2022-11-23T03:28:49Z
0
3
null
[ "license:mit", "region:us" ]
null
2022-11-23T03:28:44Z
--- license: mit --- ### Dreams on Stable Diffusion This is the `<meeg>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<meeg> 0](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/3.jpeg) ![<meeg> 1](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/0.jpeg) ![<meeg> 2](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/2.jpeg) ![<meeg> 3](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/1.jpeg) ![<meeg> 4](https://huggingface.co/sd-concepts-library/dreams/resolve/main/concept_images/4.jpeg)
Egrt/Luuuu
Egrt
2022-11-23T02:54:17Z
0
0
null
[ "license:apache-2.0", "region:us" ]
null
2022-03-20T12:11:42Z
--- license: apache-2.0 ---
jeveloper/sd-v1-4
jeveloper
2022-11-23T02:50:59Z
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2022-11-23T02:50:59Z
--- license: creativeml-openrail-m ---
nhanv/ner_cv
nhanv
2022-11-23T01:27:32Z
112
0
transformers
[ "transformers", "pytorch", "deberta-v2", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-23T01:25:59Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: reco-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # reco-ner This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0668 - Precision: 0.8125 - Recall: 0.8790 - F1: 0.8444 - Accuracy: 0.9819 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.4516 | 1.0 | 626 | 0.4047 | 0.4332 | 0.4564 | 0.4445 | 0.8980 | | 0.3677 | 2.0 | 1252 | 0.2774 | 0.4918 | 0.5731 | 0.5293 | 0.9193 | | 0.2892 | 3.0 | 1878 | 0.2133 | 0.6139 | 0.6581 | 0.6353 | 0.9384 | | 0.2736 | 4.0 | 2504 | 0.1772 | 0.6248 | 0.6854 | 0.6537 | 0.9488 | | 0.221 | 5.0 | 3130 | 0.1503 | 0.6295 | 0.7328 | 0.6772 | 0.9560 | | 0.1569 | 6.0 | 3756 | 0.1283 | 0.6821 | 0.8108 | 0.7409 | 0.9623 | | 0.1534 | 7.0 | 4382 | 0.0995 | 0.7412 | 0.8119 | 0.7749 | 0.9708 | | 0.089 | 8.0 | 5008 | 0.0846 | 0.7695 | 0.8353 | 0.8010 | 0.9760 | | 0.0923 | 9.0 | 5634 | 0.0743 | 0.7881 | 0.8740 | 0.8289 | 0.9789 | | 0.0711 | 10.0 | 6260 | 0.0668 | 0.8125 | 0.8790 | 0.8444 | 0.9819 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
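No usage example is given; a minimal token-classification sketch (the label set for CV/résumé entities is undocumented, and the example résumé line is hypothetical):

```python
from transformers import pipeline

ner = pipeline("token-classification", model="nhanv/ner_cv")

for entity in ner("John Doe, Python developer, Hanoi University of Science and Technology"):
    print(entity["word"], entity["entity"], round(float(entity["score"]), 3))
```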
AlekseyKorshuk/6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr
AlekseyKorshuk
2022-11-23T00:59:42Z
5
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "generated_from_trainer", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-22T12:39:25Z
--- license: other tags: - generated_from_trainer metrics: - accuracy model-index: - name: 6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.4121 - Accuracy: 0.3487 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.4875 | 0.11 | 1 | 2.5059 | 0.3397 | | 2.5339 | 0.22 | 2 | 2.5059 | 0.3397 | | 2.5161 | 0.33 | 3 | 2.5059 | 0.3397 | | 2.4524 | 0.44 | 4 | 2.5059 | 0.3397 | | 2.554 | 0.56 | 5 | 2.4785 | 0.3416 | | 2.4678 | 0.67 | 6 | 2.4785 | 0.3416 | | 2.4836 | 0.78 | 7 | 2.4473 | 0.3458 | | 2.4138 | 0.89 | 8 | 2.4297 | 0.3473 | | 2.4551 | 1.0 | 9 | 2.4121 | 0.3487 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
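A minimal generation sketch (note that a 6.7B-parameter OPT checkpoint needs tens of GB of memory in full precision; the prompt is hypothetical):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="AlekseyKorshuk/6.7b-dalio-principles-book-1-epoch-1-gas-6e-6-lr",
)
print(generator("Pain plus reflection equals", max_new_tokens=40)[0]["generated_text"])
```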
Gobee/Wav2vec2-Large-XLSR-Tamil
Gobee
2022-11-23T00:41:22Z
133
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard", "tamil language", "ta", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-18T16:07:57Z
--- license: apache-2.0 language: ta tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week - hf-asr-leaderboard - tamil language model-index: - name: XLSR Wav2Vec2 Tamil by Manan Dey results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ta type: common_voice args: ta metrics: - name: Test WER type: wer value: 57.004356 --- # Wav2Vec2-Large-XLSR-Tamil When using this model, make sure that your speech input is sampled at 16kHz. ## Inference The model can be used directly as follows:

```python
!pip install datasets
!pip install transformers
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import librosa
from datasets import load_dataset

test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")

# Preprocessing the datasets.
# We need to read the audio files as arrays (librosa resamples to 16 kHz).
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    batch["sentence"] = batch["sentence"].upper()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation The model can be evaluated as follows on the Tamil test data of Common Voice.

```python
!pip install datasets
!pip install transformers
!pip install jiwer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import librosa
from datasets import load_dataset, load_metric
import re

test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\ \’\–\(\)]'

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
    batch["speech"] = speech_array
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run inference on the test set and collect predictions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 57.004356 % ## Usage and Evaluation script The script used for usage and evaluation can be found [here](https://colab.research.google.com/drive/1dyDe14iOmoNoVHDJTkg-hAgLnrGdI-Dk?usp=share_link) ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
manirai91/mbert-conll2003
manirai91
2022-11-23T00:19:30Z
119
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-11-22T23:16:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 model-index: - name: mbert-conll2003 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-conll2003 This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the conll2003 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.7.0 - Tokenizers 0.13.2
jacobthebanana/galactica-30b
jacobthebanana
2022-11-22T23:16:04Z
7
1
transformers
[ "transformers", "jax", "opt", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2022-11-18T15:10:33Z
--- license: cc-by-nc-4.0 --- JAX weights converted from the Torch checkpoint at `facebook/galactica-30b`. ```python (env) ubuntu@vm:~$ JAX_PLATFORM_NAME=cpu python3 >>> import jax >>> print(jax.devices()) [CpuDevice(id=0)] # Ensure that model weights are loaded into CPU RAM, not accelerator memory. >>> from transformers import FlaxOPTForCausalLM >>> model = FlaxOPTForCausalLM.from_pretrained("facebook/galactica-30b", from_pt=True) >>> model.push_to_hub(hf_model_repo) ``` ## Citation and Attribution Citation from the original repo is reproduced below as per the cc-by-nc-4.0 license. ```bibtex @inproceedings{GALACTICA, title={GALACTICA: A Large Language Model for Science}, author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic}, year={2022} } ``` > Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)
manirai91/mbert-imdb
manirai91
2022-11-22T23:08:42Z
101
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T08:42:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: mbert-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbert-imdb This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.7.0 - Tokenizers 0.13.2
unza/xls-r-300m-nyanja-fullset
unza
2022-11-22T23:02:48Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "NyanjaSpeech", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T10:28:07Z
--- license: apache-2.0 tags: - automatic-speech-recognition - NyanjaSpeech - generated_from_trainer metrics: - wer model-index: - name: xls-r-300m-nyanja-fullset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xls-r-300m-nyanja-fullset This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the NYANJASPEECH - NYA dataset. It achieves the following results on the evaluation set: - Loss: 3.1987 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 2.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.3815 | 1.58 | 500 | 3.1987 | 1.0 | ### Framework versions - Transformers 4.25.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.7.1 - Tokenizers 0.13.2
monakth/distilbert-base-multilingual-cased-sv2
monakth
2022-11-22T22:26:39Z
105
0
transformers
[ "transformers", "pytorch", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad_v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-11-22T22:24:13Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-multilingual-cased-sv2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-sv2 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the squad_v2 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
sacculifer/dimbat_disaster_type_distilbert
sacculifer
2022-11-22T22:07:32Z
61
0
transformers
[ "transformers", "tf", "distilbert", "text-classification", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-08-05T19:36:01Z
--- tags: - generated_from_keras_callback model-index: - name: tmpzujlpono results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Tweets disaster type classification model This model was trained on part of the Disaster Tweet Corpus 2020 (Analysis of Filtering Models for Disaster-Related Tweets, Wiegmann, M. et al., 2020) dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0875 - Train Accuracy: 0.8783 - Validation Loss: 0.2980 - Validation Accuracy: 0.8133 - Epoch: 5 ## Model description Labels <br> disease --- 1 <br> earthquake --- 2 <br> flood --- 3 <br> hurricane & tornado --- 4 <br> wildfire --- 5 <br> industrial accident --- 6 <br> societal crime --- 7 <br> transportation accident --- 8 <br> meteor crash --- 9 <br> haze --- 0 ## Intended uses & limitations This model is able to detect 10 different types of disaster (natural and human-made), but it has difficulty detecting type 0 disasters because such tweets are rare in the training dataset and similar to type 5. ### Training hyperparameters The following hyperparameters were used during training: - optimizer: <br> batch_size = 16 <br> num_epochs = 5 <br> batches_per_epoch = len(tokenized_tweet["train"])//batch_size <br> total_train_steps = int(batches_per_epoch * num_epochs) <br> optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps) - training_precision: float32 ### Framework versions - Transformers 4.16.2 - TensorFlow 2.9.2 - Datasets 2.4.0 - Tokenizers 0.12.1 ### How to use it

```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sacculifer/dimbat_disaster_type_distilbert")
model = TFAutoModelForSequenceClassification.from_pretrained("sacculifer/dimbat_disaster_type_distilbert")
```
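A short inference sketch building on the snippet above (the preprocessing and example tweet are assumptions, not from the card):

```python
import tensorflow as tf

text = "A wildfire is spreading fast near the highway"  # hypothetical tweet
inputs = tokenizer(text, return_tensors="tf", truncation=True)
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred)  # e.g. 5 -> wildfire, per the label list above
```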
manirai91/enlm-roberta-imdb
manirai91
2022-11-22T20:43:14Z
113
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:imdb", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T16:57:28Z
--- tags: - generated_from_trainer datasets: - imdb model-index: - name: enlmr-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # enlmr-imdb This model is a fine-tuned version of [manirai91/enlm-r-final](https://huggingface.co/manirai91/enlm-r-final) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.7.0 - Tokenizers 0.13.2
manirai91/xlm-roberta-imdb
manirai91
2022-11-22T20:36:34Z
126
1
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "text-classification", "generated_from_trainer", "dataset:imdb", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T16:42:44Z
--- license: mit tags: - generated_from_trainer datasets: - imdb model-index: - name: xlm-roberta-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-imdb This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.24.0 - Pytorch 1.11.0 - Datasets 2.7.0 - Tokenizers 0.13.2
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2
research-backup
2022-11-22T20:25:41Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:40:00Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.790515873015873 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37967914438502676 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3857566765578635 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5063924402445803 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.646 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4517543859649123 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42824074074074076 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8080458038270304 - name: F1 (macro) type: f1_macro value: 0.7357565896819839 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7894366197183098 - name: F1 (macro) type: f1_macro value: 0.4680529848631216 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5520043336944745 - name: F1 (macro) type: f1_macro value: 0.5647005456999193 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9177157960631565 - name: F1 (macro) type: f1_macro value: 0.7991809595622609 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.770918207458477 - name: F1 (macro) type: f1_macro value: 0.701131895018139 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/analogy.json)): - Accuracy on SAT (full): 0.37967914438502676 - Accuracy on SAT: 0.3857566765578635 - Accuracy on BATS: 0.5063924402445803 - Accuracy on U2: 0.4517543859649123 - Accuracy on U4: 0.42824074074074076 - Accuracy on Google: 0.646 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8080458038270304 - Micro F1 score on CogALexV: 0.7894366197183098 - Micro F1 score on EVALution: 0.5520043336944745 - Micro F1 score on K&H+N: 0.9177157960631565 - Micro F1 score on ROOT09: 0.770918207458477 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.790515873015873 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for a roberta-base encoder ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 10 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-2/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2
research-backup
2022-11-22T20:13:56Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:38:18Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.805515873015873 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4090909090909091 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4035608308605341 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5225125069483046 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.74 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42824074074074076 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8717794184119331 - name: F1 (macro) type: f1_macro value: 0.870030953695602 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8208920187793427 - name: F1 (macro) type: f1_macro value: 0.592691470700563 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6619718309859155 - name: F1 (macro) type: f1_macro value: 0.6506618969149585 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9419906795576267 - name: F1 (macro) type: f1_macro value: 0.8428425111163615 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8326543403321841 - name: F1 (macro) type: f1_macro value: 0.8036471917183251 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2/raw/main/analogy.json)): - Accuracy on SAT (full): 0.4090909090909091 - Accuracy on SAT: 0.4035608308605341 - Accuracy on BATS: 0.5225125069483046 - Accuracy on U2: 0.41228070175438597 - Accuracy on U4: 0.42824074074074076 - Accuracy on Google: 0.74 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2/raw/main/classification.json)): - Micro F1 score on BLESS: 0.8717794184119331 - Micro F1 score on CogALexV: 0.8208920187793427 - Micro F1 score on EVALution: 0.6619718309859155 - Micro F1 score on K&H+N: 0.9419906795576267 - Micro F1 score on ROOT09: 0.8326543403321841 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.805515873015873 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (768, ) for a roberta-base encoder ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-base - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v6 - split: train - split_eval: validation - template_mode: manual - loss_function: triplet - classification_loss: False - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 6 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 2 - exclude_relation: None - n_sample: 320 - gradient_accumulation: 8 - relation_level: None The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-2/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2
research-backup
2022-11-22T19:57:35Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:36:40Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8335714285714285
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.38235294117647056
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3798219584569733
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5336297943301834
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.662
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4473684210526316
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4166666666666667
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8625885189091457
    - name: F1 (macro)
      type: f1_macro
      value: 0.8603027072164148
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8065727699530516
    - name: F1 (macro)
      type: f1_macro
      value: 0.5506373401584694
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6175514626218852
    - name: F1 (macro)
      type: f1_macro
      value: 0.6052063445391235
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9263406830354037
    - name: F1 (macro)
      type: f1_macro
      value: 0.8061025838390545
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8373550611093701
    - name: F1 (macro)
      type: f1_macro
      value: 0.837629132435287
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.38235294117647056
    - Accuracy on SAT: 0.3798219584569733
    - Accuracy on BATS: 0.5336297943301834
    - Accuracy on U2: 0.4473684210526316
    - Accuracy on U4: 0.4166666666666667
    - Accuracy on Google: 0.662
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8625885189091457
    - Micro F1 score on CogALexV: 0.8065727699530516
    - Micro F1 score on EVALution: 0.6175514626218852
    - Micro F1 score on K&H+N: 0.9263406830354037
    - Micro F1 score on ROOT09: 0.8373550611093701
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8335714285714285

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
research-backup
2022-11-22T19:43:03Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:34:42Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7143253968253969
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.30213903743315507
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.29673590504451036
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.41078376876042244
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.444
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3508771929824561
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.35185185185185186
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8389332529757421
    - name: F1 (macro)
      type: f1_macro
      value: 0.8320870274406121
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8110328638497653
    - name: F1 (macro)
      type: f1_macro
      value: 0.558175722976752
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6397616468039004
    - name: F1 (macro)
      type: f1_macro
      value: 0.6018197960350038
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.936495791889824
    - name: F1 (macro)
      type: f1_macro
      value: 0.8329891004271437
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8574114697586963
    - name: F1 (macro)
      type: f1_macro
      value: 0.859031346414651
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.30213903743315507
    - Accuracy on SAT: 0.29673590504451036
    - Accuracy on BATS: 0.41078376876042244
    - Accuracy on U2: 0.3508771929824561
    - Accuracy on U4: 0.35185185185185186
    - Accuracy on Google: 0.444
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8389332529757421
    - Micro F1 score on CogALexV: 0.8110328638497653
    - Micro F1 score on EVALution: 0.6397616468039004
    - Micro F1 score on K&H+N: 0.936495791889824
    - Micro F1 score on ROOT09: 0.8574114697586963
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7143253968253969

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2
research-backup
2022-11-22T19:24:57Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:32:33Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7209126984126984
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3770053475935829
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3798219584569733
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.47971095052807117
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.59
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.40789473684210525
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4097222222222222
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8154286575259906
    - name: F1 (macro)
      type: f1_macro
      value: 0.7865332203445532
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7894366197183098
    - name: F1 (macro)
      type: f1_macro
      value: 0.5861443079105217
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.5926327193932828
    - name: F1 (macro)
      type: f1_macro
      value: 0.570784404874489
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9367740140502191
    - name: F1 (macro)
      type: f1_macro
      value: 0.8321272383660834
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8492635537449076
    - name: F1 (macro)
      type: f1_macro
      value: 0.831350713957581
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3770053475935829
    - Accuracy on SAT: 0.3798219584569733
    - Accuracy on BATS: 0.47971095052807117
    - Accuracy on U2: 0.40789473684210525
    - Accuracy on U4: 0.4097222222222222
    - Accuracy on Google: 0.59
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8154286575259906
    - Micro F1 score on CogALexV: 0.7894366197183098
    - Micro F1 score on EVALution: 0.5926327193932828
    - Micro F1 score on K&H+N: 0.9367740140502191
    - Micro F1 score on ROOT09: 0.8492635537449076
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7209126984126984

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-a-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2
research-backup
2022-11-22T19:10:57Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:30:50Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.47160714285714284
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.34759358288770054
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3590504451038576
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4980544747081712
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.544
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.38596491228070173
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.38657407407407407
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.835919843302697
    - name: F1 (macro)
      type: f1_macro
      value: 0.8291105198617971
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7704225352112676
    - name: F1 (macro)
      type: f1_macro
      value: 0.4170022869326865
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6121343445287107
    - name: F1 (macro)
      type: f1_macro
      value: 0.5765221107709003
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9145162412186131
    - name: F1 (macro)
      type: f1_macro
      value: 0.783440515726974
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8223127546223754
    - name: F1 (macro)
      type: f1_macro
      value: 0.8219042972063227
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.34759358288770054
    - Accuracy on SAT: 0.3590504451038576
    - Accuracy on BATS: 0.4980544747081712
    - Accuracy on U2: 0.38596491228070173
    - Accuracy on U4: 0.38657407407407407
    - Accuracy on Google: 0.544
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.835919843302697
    - Micro F1 score on CogALexV: 0.7704225352112676
    - Micro F1 score on EVALution: 0.6121343445287107
    - Micro F1 score on K&H+N: 0.9145162412186131
    - Micro F1 score on ROOT09: 0.8223127546223754
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.47160714285714284

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 4
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2
research-backup
2022-11-22T18:49:15Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:28:58Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6089087301587301
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.43315508021390375
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.44510385756676557
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6120066703724292
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.878
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4473684210526316
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.49537037037037035
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8796142835618503
    - name: F1 (macro)
      type: f1_macro
      value: 0.8747731277585521
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8394366197183099
    - name: F1 (macro)
      type: f1_macro
      value: 0.6300385764057015
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6749729144095341
    - name: F1 (macro)
      type: f1_macro
      value: 0.6626586846228053
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9191069068651319
    - name: F1 (macro)
      type: f1_macro
      value: 0.8114897599095089
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8921968035098715
    - name: F1 (macro)
      type: f1_macro
      value: 0.8854495217016495
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.43315508021390375
    - Accuracy on SAT: 0.44510385756676557
    - Accuracy on BATS: 0.6120066703724292
    - Accuracy on U2: 0.4473684210526316
    - Accuracy on U4: 0.49537037037037035
    - Accuracy on Google: 0.878
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8796142835618503
    - Micro F1 score on CogALexV: 0.8394366197183099
    - Micro F1 score on EVALution: 0.6749729144095341
    - Micro F1 score on K&H+N: 0.9191069068651319
    - Micro F1 score on ROOT09: 0.8921968035098715
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.6089087301587301

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
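### Example: answering an analogy question
Since this checkpoint reports the strongest analogy accuracies among the checkpoints in this batch, here is a hedged sketch of how relation embeddings can answer a multiple-choice analogy question by picking the candidate pair closest to the query pair. The query and candidates below are illustrative, not taken from the benchmark.
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-2")

query = ['king', 'queen']
candidates = [['man', 'woman'], ['tree', 'leaf'], ['car', 'road']]

q = np.asarray(model.get_embedding(query))
scores = []
for pair in candidates:
    c = np.asarray(model.get_embedding(pair))
    # cosine similarity between query and candidate relation embeddings
    scores.append(float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c))))

# the candidate whose relation embedding is closest to the query is the answer
print(candidates[int(np.argmax(scores))])
```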
HarshitaDiddee/AmericasNLP_Guarani
HarshitaDiddee
2022-11-22T18:46:18Z
4
0
transformers
[ "transformers", "wav2vec2", "automatic-speech-recognition", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T18:45:05Z
---
license: cc-by-4.0
---
ASR model for Guarani (source: AmericasNLP Shared Task for Low-Resource ASR).
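The tags indicate a wav2vec2 CTC checkpoint, so a minimal inference sketch with the `transformers` ASR pipeline could look as follows; `audio.wav` is a placeholder for your own recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HarshitaDiddee/AmericasNLP_Guarani")
print(asr("audio.wav")["text"])  # transcription of the input audio
```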
HarshitaDiddee/AmericasNLP_Bribri
HarshitaDiddee
2022-11-22T18:35:11Z
91
0
transformers
[ "transformers", "wav2vec2", "automatic-speech-recognition", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T18:24:40Z
---
license: cc-by-4.0
---
ASR model for Bribri (source: AmericasNLP Shared Task 2022).
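A sketch of inference using the model classes directly; this assumes the repository ships a processor/tokenizer config and that the input is 16 kHz mono audio (the silent placeholder below stands in for a real recording).
```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("HarshitaDiddee/AmericasNLP_Bribri")
model = Wav2Vec2ForCTC.from_pretrained("HarshitaDiddee/AmericasNLP_Bribri")

speech = [0.0] * 16000  # placeholder: one second of silence instead of real audio
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)        # greedy CTC decoding
print(processor.batch_decode(pred_ids))
```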
umairalipathan/finetuning-sentiment-model-surrender-final
umairalipathan
2022-11-22T18:17:49Z
107
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T18:08:12Z
---
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-surrender-final
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-surrender-final

This model is a fine-tuned version of [umairalipathan/autotrain-sisu_surrender-2206370778](https://huggingface.co/umairalipathan/autotrain-sisu_surrender-2206370778) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2072
- eval_accuracy: 0.9556
- eval_f1: 0.9714
- eval_runtime: 8.4
- eval_samples_per_second: 5.357
- eval_steps_per_second: 0.357
- step: 0

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Framework versions

- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
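Pending the missing usage section, a hedged inference sketch with the `transformers` pipeline; the label names and their meaning depend on this model's config and are not documented here.
```python
from transformers import pipeline

clf = pipeline("text-classification", model="umairalipathan/finetuning-sentiment-model-surrender-final")
print(clf("I will never give up."))  # e.g. [{'label': ..., 'score': ...}]
```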
motmono/Modified-Reinforce-PixelCopter
motmono
2022-11-22T18:16:23Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2022-11-22T18:13:52Z
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Modified-Reinforce-PixelCopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 16.10 +/- 10.73
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
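For reference, the core Monte-Carlo policy-gradient update that a REINFORCE agent optimizes can be sketched as below. This is a generic sketch of the algorithm, not this repository's exact training code.
```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """REINFORCE loss: -sum_t log pi(a_t | s_t) * G_t over one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):        # discounted return-to-go
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    # standardizing returns is a common variance-reduction trick
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    return -(torch.stack(log_probs) * returns).sum()
```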
renjithman/finetuning-sentiment-model-3000-samples
renjithman
2022-11-22T17:43:52Z
105
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T17:30:07Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      config: plain_text
      split: train
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.87
    - name: F1
      type: f1
      value: 0.8704318936877077
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuning-sentiment-model-3000-samples

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3099
- Accuracy: 0.87
- F1: 0.8704

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
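In the absence of a usage section, a hedged sketch of inference without the pipeline helper, reading the predicted label from the model config; the example sentence is arbitrary.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "renjithman/finetuning-sentiment-model-3000-samples"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A surprisingly touching film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```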
datasciencemmw/old-beta2
datasciencemmw
2022-11-22T17:37:01Z
101
1
transformers
[ "transformers", "pytorch", "autotrain", "text-classification", "en", "dataset:LiveEvil/autotrain-data-copuml-la-beta-demo", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
text-classification
2022-11-22T17:35:39Z
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- LiveEvil/autotrain-data-copuml-la-beta-demo
co2_eq_emissions:
  emissions: 1.2815143214785873
---

# Model Trained Using AutoTrain

- Problem type: Multi-class Classification
- Model ID: 2205770755
- CO2 Emissions (in grams): 1.2815

## Validation Metrics

- Loss: 1.085
- Accuracy: 0.747
- Macro F1: 0.513
- Micro F1: 0.747
- Weighted F1: 0.715
- Macro Precision: 0.533
- Micro Precision: 0.747
- Weighted Precision: 0.691
- Macro Recall: 0.515
- Micro Recall: 0.747
- Weighted Recall: 0.747

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/LiveEvil/autotrain-copuml-la-beta-demo-2205770755
```

Or Python API:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("LiveEvil/autotrain-copuml-la-beta-demo-2205770755", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
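To turn the raw logits from the Python example into class probabilities and a label, one possible continuation (the label mapping comes from the model config; this assumes `model` and `outputs` from the snippet above are in scope):
```python
import torch

probs = torch.softmax(outputs.logits, dim=-1)          # logits -> probabilities
label = model.config.id2label[int(probs.argmax(dim=-1))]  # most likely class
print(label, probs.max().item())
```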
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
research-backup
2022-11-22T17:34:18Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:40:04Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8018650793650793
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3502673796791444
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.35014836795252224
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5202890494719289
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.644
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.39035087719298245
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.43287037037037035
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8461654361910502
    - name: F1 (macro)
      type: f1_macro
      value: 0.8411664963735426
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8145539906103286
    - name: F1 (macro)
      type: f1_macro
      value: 0.5873414064116238
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6505958829902492
    - name: F1 (macro)
      type: f1_macro
      value: 0.6269958308732405
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9319051262433052
    - name: F1 (macro)
      type: f1_macro
      value: 0.8393686548194149
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7511751801942964
    - name: F1 (macro)
      type: f1_macro
      value: 0.6464435364634403
---
# relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3502673796791444
    - Accuracy on SAT: 0.35014836795252224
    - Accuracy on BATS: 0.5202890494719289
    - Accuracy on U2: 0.39035087719298245
    - Accuracy on U4: 0.43287037037037035
    - Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8461654361910502
    - Micro F1 score on CogALexV: 0.8145539906103286
    - Micro F1 score on EVALution: 0.6505958829902492
    - Micro F1 score on K&H+N: 0.9319051262433052
    - Micro F1 score on ROOT09: 0.7511751801942964
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8018650793650793

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
research-backup
2022-11-22T17:33:29Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:22:15Z
---
datasets:
- relbert/semeval2012_relational_similarity_v6
model-index:
- name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7463293650793651
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.34759358288770054
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3590504451038576
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.481378543635353
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.494
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3991228070175439
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.35648148148148145
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8610818140726232
    - name: F1 (macro)
      type: f1_macro
      value: 0.8525458448699613
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8171361502347417
    - name: F1 (macro)
      type: f1_macro
      value: 0.5610856949320919
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6229685807150596
    - name: F1 (macro)
      type: f1_macro
      value: 0.6126645128177534
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9215413507685887
    - name: F1 (macro)
      type: f1_macro
      value: 0.8042276096823726
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.857724851143842
    - name: F1 (macro)
      type: f1_macro
      value: 0.8472661094927697
---
# relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2

RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.34759358288770054
    - Accuracy on SAT: 0.3590504451038576
    - Accuracy on BATS: 0.481378543635353
    - Accuracy on U2: 0.3991228070175439
    - Accuracy on U4: 0.35648148148148145
    - Accuracy on Google: 0.494
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8610818140726232
    - Micro F1 score on CogALexV: 0.8171361502347417
    - Micro F1 score on EVALution: 0.6229685807150596
    - Micro F1 score on K&H+N: 0.9215413507685887
    - Micro F1 score on ROOT09: 0.857724851143842
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7463293650793651

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for the roberta-base encoder
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1
research-backup
2022-11-22T17:31:19Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:38:22Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7048015873015873 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37967914438502676 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3916913946587537 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5347415230683713 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.69 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3888888888888889 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.853246948922706 - name: F1 (macro) type: f1_macro value: 0.8485536876305343 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8044600938967136 - name: F1 (macro) type: f1_macro value: 0.5726819680585065 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5839653304442037 - name: F1 (macro) type: f1_macro value: 0.5524953070884607 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.934687347847256 - name: F1 (macro) type: f1_macro value: 0.8063588254058023 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8279536195549985 - name: F1 (macro) type: f1_macro value: 0.7955713493721125 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.37967914438502676
    - Accuracy on SAT: 0.3916913946587537
    - Accuracy on BATS: 0.5347415230683713
    - Accuracy on U2: 0.41228070175438597
    - Accuracy on U4: 0.3888888888888889
    - Accuracy on Google: 0.69
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.853246948922706
    - Micro F1 score on CogALexV: 0.8044600938967136
    - Micro F1 score on EVALution: 0.5839653304442037
    - Micro F1 score on K&H+N: 0.934687347847256
    - Micro F1 score on ROOT09: 0.8279536195549985
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7048015873015873

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base checkpoint
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
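Beyond the single-pair snippet above, RelBERT relation embeddings are typically compared with cosine similarity. The sketch below is a minimal illustration (the word pairs are arbitrary examples, not from the card):

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-1")

# Embed two word pairs and compare their relations via cosine similarity.
v_a = np.asarray(model.get_embedding(['Tokyo', 'Japan']))
v_b = np.asarray(model.get_embedding(['Paris', 'France']))

similarity = float(v_a @ v_b / (np.linalg.norm(v_a) * np.linalg.norm(v_b)))
print(similarity)  # closer to 1 when the two pairs share the same relation
```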
alanoix/whisper-small-br
alanoix
2022-11-22T17:26:31Z
80
1
transformers
[ "transformers", "pytorch", "tensorboard", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "br", "dataset:mozilla-foundation/common_voice_11_0", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T09:51:24Z
---
language:
- br
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-small-br
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      args: 'config: br, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 49.98168162667155
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# whisper-small-br

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8542
- Wer: 49.9817

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1415        | 3.36  | 1000 | 0.7406          | 54.0117 |
| 0.0147        | 6.71  | 2000 | 0.7909          | 51.5479 |
| 0.0011        | 10.07 | 3000 | 0.8368          | 49.7710 |
| 0.0007        | 13.42 | 4000 | 0.8542          | 49.9817 |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
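The card does not include a usage snippet; a minimal transcription sketch with the `transformers` pipeline might look as follows. The audio path is a placeholder, and ffmpeg is assumed to be available for decoding:

```python
from transformers import pipeline

# Load the fine-tuned Breton Whisper checkpoint as an ASR pipeline.
asr = pipeline("automatic-speech-recognition", model="alanoix/whisper-small-br")

# "sample.wav" is a placeholder path to a local audio file.
result = asr("sample.wav")
print(result["text"])
```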
dung1308/dung_NT_model_save
dung1308
2022-11-22T17:22:09Z
65
0
transformers
[ "transformers", "tf", "roberta", "fill-mask", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-22T01:33:27Z
---
tags:
- generated_from_keras_callback
model-index:
- name: dung1308/dung_NT_model_save
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. -->

# dung1308/dung_NT_model_save

This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.8144
- Validation Loss: 3.6030
- Epoch: 2

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16

### Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4431     | 3.9985          | 0     |
| 3.9986     | 3.8016          | 1     |
| 3.8144     | 3.6030          | 2     |

### Framework versions

- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.7.0
- Tokenizers 0.11.0
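Since the card leaves usage unspecified, a hedged fill-mask sketch is given below. The Vietnamese example sentence is only an illustration, and TensorFlow weights are assumed (the repository carries the `tf` tag):

```python
from transformers import pipeline

# The repository ships TensorFlow weights, so the TF framework is requested explicitly.
fill_mask = pipeline("fill-mask", model="dung1308/dung_NT_model_save", framework="tf")

# Build an example around the tokenizer's own mask token (PhoBERT uses "<mask>").
sentence = f"Hà Nội là thủ đô của Việt {fill_mask.tokenizer.mask_token} ."
for prediction in fill_mask(sentence):
    print(prediction["token_str"], prediction["score"])
```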
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1
research-backup
2022-11-22T17:13:57Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:30:48Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7387698412698412 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3342245989304813 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34718100890207715 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5441912173429683 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.644 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35526315789473684 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37962962962962965 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8145246346240772 - name: F1 (macro) type: f1_macro value: 0.801802054210856 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7774647887323943 - name: F1 (macro) type: f1_macro value: 0.5026184700694826 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5980498374864572 - name: F1 (macro) type: f1_macro value: 0.5765100456864519 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8878069138206858 - name: F1 (macro) type: f1_macro value: 0.7711282513838499 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.827326856784707 - name: F1 (macro) type: f1_macro value: 0.824410778730745 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3342245989304813
    - Accuracy on SAT: 0.34718100890207715
    - Accuracy on BATS: 0.5441912173429683
    - Accuracy on U2: 0.35526315789473684
    - Accuracy on U4: 0.37962962962962965
    - Accuracy on Google: 0.644
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8145246346240772
    - Micro F1 score on CogALexV: 0.7774647887323943
    - Micro F1 score on EVALution: 0.5980498374864572
    - Micro F1 score on K&H+N: 0.8878069138206858
    - Micro F1 score on ROOT09: 0.827326856784707
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7387698412698412

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base checkpoint
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1
research-backup
2022-11-22T17:10:39Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:28:59Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7853174603174603 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4197860962566845 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42433234421364985 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5619788771539744 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.744 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.43859649122807015 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4351851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8895585354828989 - name: F1 (macro) type: f1_macro value: 0.8809341644131754 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8453051643192488 - name: F1 (macro) type: f1_macro value: 0.624040279392662 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6793066088840737 - name: F1 (macro) type: f1_macro value: 0.6602046108703392 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9344786812269598 - name: F1 (macro) type: f1_macro value: 0.8375382298577612 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8658727671576308 - name: F1 (macro) type: f1_macro value: 0.8645267089284405 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.4197860962566845
    - Accuracy on SAT: 0.42433234421364985
    - Accuracy on BATS: 0.5619788771539744
    - Accuracy on U2: 0.43859649122807015
    - Accuracy on U4: 0.4351851851851852
    - Accuracy on Google: 0.744
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8895585354828989
    - Micro F1 score on CogALexV: 0.8453051643192488
    - Micro F1 score on EVALution: 0.6793066088840737
    - Micro F1 score on K&H+N: 0.9344786812269598
    - Micro F1 score on ROOT09: 0.8658727671576308
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7853174603174603

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base checkpoint
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1
research-backup
2022-11-22T17:06:31Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:26:58Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35561497326203206 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34718100890207715 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.48526959421901056 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.39473684210526316 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3541666666666667 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8442067199035708 - name: F1 (macro) type: f1_macro value: 0.823901479879959 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8110328638497653 - name: F1 (macro) type: f1_macro value: 0.5472550813103398 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5769230769230769 - name: F1 (macro) type: f1_macro value: 0.5466975926628965 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9118035751547611 - name: F1 (macro) type: f1_macro value: 0.7693980437177949 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8564713256032591 - name: F1 (macro) type: f1_macro value: 0.851273747817193 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.35561497326203206
    - Accuracy on SAT: 0.34718100890207715
    - Accuracy on BATS: 0.48526959421901056
    - Accuracy on U2: 0.39473684210526316
    - Accuracy on U4: 0.3541666666666667
    - Accuracy on Google: 0.618
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8442067199035708
    - Micro F1 score on CogALexV: 0.8110328638497653
    - Micro F1 score on EVALution: 0.5769230769230769
    - Micro F1 score on K&H+N: 0.9118035751547611
    - Micro F1 score on ROOT09: 0.8564713256032591
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base checkpoint
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1
research-backup
2022-11-22T17:03:22Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:24:49Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7523809523809524 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35294117647058826 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.35014836795252224 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4191217342968316 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.554 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4050925925925926 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8410426397468735 - name: F1 (macro) type: f1_macro value: 0.8153049654017815 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7981220657276996 - name: F1 (macro) type: f1_macro value: 0.5156838585733334 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.605092091007584 - name: F1 (macro) type: f1_macro value: 0.5707468312851958 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9076997982889338 - name: F1 (macro) type: f1_macro value: 0.7719219859032024 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.855531181447822 - name: F1 (macro) type: f1_macro value: 0.8548547221202175 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.35294117647058826
    - Accuracy on SAT: 0.35014836795252224
    - Accuracy on BATS: 0.4191217342968316
    - Accuracy on U2: 0.41228070175438597
    - Accuracy on U4: 0.4050925925925926
    - Accuracy on Google: 0.554
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8410426397468735
    - Micro F1 score on CogALexV: 0.7981220657276996
    - Micro F1 score on EVALution: 0.605092091007584
    - Micro F1 score on K&H+N: 0.9076997982889338
    - Micro F1 score on ROOT09: 0.855531181447822
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7523809523809524

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base checkpoint
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1
research-backup
2022-11-22T17:00:21Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:22:15Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8430952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3582887700534759 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3649851632047478 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4280155642023346 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.532 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3333333333333333 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3101851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8460147657073979 - name: F1 (macro) type: f1_macro value: 0.8315897128108677 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8084507042253521 - name: F1 (macro) type: f1_macro value: 0.5269777075808457 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6424702058504875 - name: F1 (macro) type: f1_macro value: 0.6178608994596904 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.913612019197329 - name: F1 (macro) type: f1_macro value: 0.7738790468743169 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8693199623942337 - name: F1 (macro) type: f1_macro value: 0.864532922094076 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3582887700534759
    - Accuracy on SAT: 0.3649851632047478
    - Accuracy on BATS: 0.4280155642023346
    - Accuracy on U2: 0.3333333333333333
    - Accuracy on U4: 0.3101851851851852
    - Accuracy on Google: 0.532
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8460147657073979
    - Micro F1 score on CogALexV: 0.8084507042253521
    - Micro F1 score on EVALution: 0.6424702058504875
    - Micro F1 score on K&H+N: 0.913612019197329
    - Micro F1 score on ROOT09: 0.8693199623942337
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8430952380952381

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ) for this roberta-base checkpoint
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-triplet-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
jpcompartir/579-private-v3
jpcompartir
2022-11-22T16:58:43Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-22T16:58:31Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# jpcompartir/579-private-v3

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('jpcompartir/579-private-v3')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jpcompartir/579-private-v3')
model = AutoModel.from_pretrained('jpcompartir/579-private-v3')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jpcompartir/579-private-v3)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 3000 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2.9621969030370343e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 3000,
    "warmup_steps": 300,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
SweepCake/LunarLander-v2-PPO-HFcourse
SweepCake
2022-11-22T15:44:29Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-11-22T15:44:07Z
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 239.22 +/- 13.04
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
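One possible completion of the TODO above is sketched below. The checkpoint filename inside the repo is an assumption based on the usual sb3 naming convention, and the classic (pre-0.26) gym step API is assumed:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is a guess; check the repository's file list before running.
checkpoint = load_from_hub(
    repo_id="SweepCake/LunarLander-v2-PPO-HFcourse",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode to sanity-check the agent (classic gym API).
env = gym.make("LunarLander-v2")
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(total_reward)
```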
huggingtweets/oryxspioenkop
huggingtweets
2022-11-22T15:10:21Z
111
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-11-22T15:09:05Z
---
language: en
thumbnail: http://www.huggingtweets.com/oryxspioenkop/1669129816805/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/929707102083395584/tCWiYbO1_400x400.jpg&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
<div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"></div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Oryx</div>
<div style="text-align: center; font-size: 14px;">@oryxspioenkop</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from Oryx.

| Data | Oryx |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 2219 |
| Short tweets | 266 |
| Tweets kept | 761 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/qbqfz863/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @oryxspioenkop's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2es3q78b/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline
generator = pipeline('text-generation', model='huggingtweets/oryxspioenkop')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Dundalia/lfqa_covid
Dundalia
2022-11-22T15:07:37Z
105
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-22T14:39:45Z
---
license: mit
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: lfqa_covid
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# lfqa_covid

This model is a fine-tuned version of [vblagoje/bart_lfqa](https://huggingface.co/vblagoje/bart_lfqa) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1028
- Bleu: 0.0
- Gen Len: 19.8564

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 1.5923        | 1.0   | 808  | 0.1028          | 0.0  | 19.8564 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
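No usage example is provided in the card; a hedged sketch follows. The `question: ... context: ...` input format is assumed to carry over from the base long-form QA model `vblagoje/bart_lfqa`:

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Dundalia/lfqa_covid")

# Input format assumed from the base model: a question plus supporting context.
prompt = (
    "question: How does COVID-19 spread? "
    "context: The virus spreads mainly through respiratory droplets produced "
    "when an infected person coughs, sneezes or talks."
)
answer = generator(prompt, max_length=64)
print(answer[0]["generated_text"])
```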
jjjunyeong/bart-finetuned-squad
jjjunyeong
2022-11-22T14:42:07Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bart", "text2text-generation", "generated_from_trainer", "dataset:squad", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-11-22T12:27:04Z
---
tags:
- generated_from_trainer
datasets:
- squad
metrics:
- rouge
model-index:
- name: bart-finetuned-squad
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: train
      args: plain_text
    metrics:
    - name: Rouge1
      type: rouge
      value: 50.1505
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bart-finetuned-squad

This model is a fine-tuned version of [p208p2002/bart-squad-qg-hl](https://huggingface.co/p208p2002/bart-squad-qg-hl) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8813
- Rouge1: 50.1505
- Rouge2: 26.8606
- Rougel: 46.0203
- Rougelsum: 46.0242

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.5702        | 1.0   | 125  | 1.4266          | 49.7474 | 26.6965 | 46.3227 | 46.342    |
| 0.84          | 2.0   | 250  | 1.4845          | 49.8379 | 26.3973 | 45.126  | 45.1791   |
| 0.535         | 3.0   | 375  | 1.6037          | 50.1413 | 27.4581 | 46.7795 | 46.8001   |
| 0.3621        | 4.0   | 500  | 1.6899          | 49.6087 | 25.9818 | 45.0914 | 45.1004   |
| 0.2448        | 5.0   | 625  | 1.7540          | 49.7468 | 26.5312 | 45.5623 | 45.5296   |
| 0.1756        | 6.0   | 750  | 1.8287          | 49.4987 | 26.2315 | 45.3515 | 45.4214   |
| 0.13          | 7.0   | 875  | 1.8809          | 49.6426 | 26.4688 | 45.5167 | 45.5427   |
| 0.1016        | 8.0   | 1000 | 1.8813          | 50.1505 | 26.8606 | 46.0203 | 46.0242   |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
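As with the other auto-generated cards, no usage snippet is included; the sketch below assumes the `[HL]` answer-highlight convention of the base question-generation model `p208p2002/bart-squad-qg-hl`:

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="jjjunyeong/bart-finetuned-squad")

# The answer span is wrapped in [HL] tokens, following the base model's convention.
context = "Hugging Face was founded in [HL] 2016 [HL] and is known for its Transformers library."
question = qg(context, max_length=64)
print(question[0]["generated_text"])
```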
sd-concepts-library/ugly_sonic_enhanced
sd-concepts-library
2022-11-22T13:46:22Z
0
2
null
[ "license:openrail", "region:us" ]
null
2022-11-22T13:25:22Z
---
license: openrail
---

Yes, he is back, better than ever. And with a beautiful Green Hill Zone. Renders in Automatic1111.

![04428-3036068214-uglyzonix.png](https://s3.amazonaws.com/moonup/production/uploads/1669124772659-630406f20907b9a115c620e6.png)

![04427-970404119-uglyzonix.png](https://s3.amazonaws.com/moonup/production/uploads/1669124772661-630406f20907b9a115c620e6.png)

![04426-3850462960-uglyzonix.png](https://s3.amazonaws.com/moonup/production/uploads/1669124772658-630406f20907b9a115c620e6.png)
adrianccy/donut-base-sroie-fine-tuned
adrianccy
2022-11-22T13:41:56Z
45
0
transformers
[ "transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
2022-11-22T10:33:43Z
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie-fine-tuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# donut-base-sroie-fine-tuned

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.24.0
- Pytorch 1.10.0
- Datasets 2.7.0
- Tokenizers 0.13.2
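A usage sketch for receipt parsing follows. The `<s_sroie>` task prompt is an assumption borrowed from common Donut SROIE fine-tunes and should be checked against this checkpoint's tokenizer; the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("adrianccy/donut-base-sroie-fine-tuned")
model = VisionEncoderDecoderModel.from_pretrained("adrianccy/donut-base-sroie-fine-tuned")

# "receipt.png" is a placeholder for a scanned receipt image.
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# "<s_sroie>" as the task start prompt is an assumption, not confirmed by the card.
decoder_input_ids = processor.tokenizer(
    "<s_sroie>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

sequence = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(processor.token2json(sequence))
```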
Ngit/fail-detect
Ngit
2022-11-22T13:03:29Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-11-22T13:03:15Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# Ngit/fail-detect

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Ngit/fail-detect')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Ngit/fail-detect')
model = AutoModel.from_pretrained('Ngit/fail-detect')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Ngit/fail-detect)

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 625 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 625,
    "warmup_steps": 63,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
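As a quick extension of the examples above, the embeddings can be compared with cosine similarity for semantic-search-style scoring; a minimal sketch, assuming the model is published as `Ngit/fail-detect`:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Ngit/fail-detect')  # assumed repository id

embeddings = model.encode(
    ["The deployment failed with an error", "The release completed successfully"],
    convert_to_tensor=True,
)
# Cosine similarity of the two sentence embeddings, in [-1, 1].
print(util.cos_sim(embeddings[0], embeddings[1]).item())
```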
merve/model-broken-config
merve
2022-11-22T13:03:15Z
0
0
sklearn
[ "sklearn", "skops", "tabular-classification", "region:us" ]
tabular-classification
2022-11-22T13:01:31Z
--- library_name: sklearn tags: - sklearn - skops - tabular-classification widget: structuredData: attribute_0: - material_7 - material_7 - material_7 attribute_1: - material_8 - material_8 - material_6 attribute_2: - 5 - 5 - 6 attribute_3: - 8 - 8 - 9 loading: - 154.02 - 108.73 - 99.84 measurement_0: - 14 - 4 - 6 measurement_1: - 6 - 7 - 7 measurement_10: - 16.637 - 16.207 - 17.17 measurement_11: - 20.719 - 20.058 - 20.858 measurement_12: - 12.824 - 11.898 - 10.968 measurement_13: - 16.067 - 13.871 - 16.448 measurement_14: - 15.181 - 14.266 - 15.6 measurement_15: - 18.546 - 15.734 - 14.637 measurement_16: - 19.402 - 16.886 - 13.86 measurement_17: - 643.086 - 642.533 - 673.545 measurement_2: - 6 - 9 - 6 measurement_3: - 19.532 - 18.128 - NaN measurement_4: - 11.017 - 11.866 - 10.064 measurement_5: - 15.639 - 17.891 - 16.287 measurement_6: - 16.709 - 20.302 - 17.445 measurement_7: - 10.057 - NaN - 12.117 measurement_8: - 20.201 - 18.148 - 20.659 measurement_9: - 11.106 - 10.221 - 11.999 product_code: - C - C - E --- # Model description This is a DecisionTreeClassifier model built for Kaggle Tabular Playground Series August 2022, trained on supersoaker production failures dataset. ## Intended uses & limitations This model is not ready to be used in production. ## Training Procedure ### Hyperparameters The model is trained with below hyperparameters. <details> <summary> Click to expand </summary> | Hyperparameter | Value | |-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | memory | | | steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(),['loading', 'measurement_3', 'measurement_4','measurement_5', 'measurement_6','measurement_7', 'measurement_8','measurement_9', 'measurement_10','measurement_11', 'measurement_12','measurement_13', 'measurement_14','measurement_15', 'measurement_16','measurement_17']),('attribute_0_encoder', OneHotEncoder(),['attribute_0']),('attribute_1_encoder', OneHotEncoder(),['attribute_1']),('product_code_encoder', OneHotEncoder(),['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] | | verbose | False | | transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(), ['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3', 'measurement_4','measurement_5', 'measurement_6','measurement_7', 'measurement_8','measurement_9', 'measurement_10','measurement_11', 'measurement_12','measurement_13', 'measurement_14','measurement_15', 'measurement_16','measurement_17']),('attribute_0_encoder', OneHotEncoder(),['attribute_0']),('attribute_1_encoder', OneHotEncoder(), 'attribute_1']),('product_code_encoder', OneHotEncoder(),['product_code'])]) | | model | DecisionTreeClassifier(max_depth=4) | | transformation__n_jobs | | | transformation__remainder | drop | | 
transformation__sparse_threshold | 0.3 | | transformation__transformer_weights | | | transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(),['product_code'])] | | transformation__verbose | False | | transformation__verbose_feature_names_out | True | | transformation__loading_missing_value_imputer | SimpleImputer() | | transformation__numerical_missing_value_imputer | SimpleImputer() | | transformation__attribute_0_encoder | OneHotEncoder() | | transformation__attribute_1_encoder | OneHotEncoder() | | transformation__product_code_encoder | OneHotEncoder() | | transformation__loading_missing_value_imputer__add_indicator | False | | transformation__loading_missing_value_imputer__copy | True | | transformation__loading_missing_value_imputer__fill_value | | | transformation__loading_missing_value_imputer__missing_values | nan | | transformation__loading_missing_value_imputer__strategy | mean | | transformation__loading_missing_value_imputer__verbose | 0 | | transformation__numerical_missing_value_imputer__add_indicator | False | | transformation__numerical_missing_value_imputer__copy | True | | transformation__numerical_missing_value_imputer__fill_value | | | transformation__numerical_missing_value_imputer__missing_values | nan | | transformation__numerical_missing_value_imputer__strategy | mean | | transformation__numerical_missing_value_imputer__verbose | 0 | | transformation__attribute_0_encoder__categories | auto | | transformation__attribute_0_encoder__drop | | | transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> | | transformation__attribute_0_encoder__handle_unknown | error | | transformation__attribute_0_encoder__sparse | True | | transformation__attribute_1_encoder__categories | auto | | transformation__attribute_1_encoder__drop | | | transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> | | transformation__attribute_1_encoder__handle_unknown | error | | transformation__attribute_1_encoder__sparse | True | | transformation__product_code_encoder__categories | auto | | transformation__product_code_encoder__drop | | | transformation__product_code_encoder__dtype | <class 'numpy.float64'> | | transformation__product_code_encoder__handle_unknown | error | | transformation__product_code_encoder__sparse | True | | model__ccp_alpha | 0.0 | | model__class_weight | | | model__criterion | gini | | model__max_depth | 4 | | model__max_features | | | model__max_leaf_nodes | | | model__min_impurity_decrease | 0.0 | | model__min_samples_leaf | 1 | | model__min_samples_split | 2 | | model__min_weight_fraction_leaf | 0.0 | | model__random_state | | | model__splitter | best | </details> ### Model Plot The model plot is below. 
The model plot is an interactive HTML rendering of the fitted pipeline: a `ColumnTransformer` with the imputers and one-hot encoders listed in the hyperparameter table, followed by `DecisionTreeClassifier(max_depth=4)`. The raw HTML/CSS markup is omitted here.
id="dbcb65f9-3068-4263-9c1c-2e6413804681" type="checkbox" ><label for="dbcb65f9-3068-4263-9c1c-2e6413804681" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier(max_depth=4)</pre></div></div></div></div></div></div></div> Evaluation Results You can find the details about evaluation process and the evaluation results. | Metric | Value | |----------|---------| | accuracy | 0.7888 | | f1 score | 0.7888 | # How to Get Started with the Model Use the code below to get started with the model. <details> <summary> Click to expand </summary> ```python import pickle with open(decision-tree-playground-kaggle/model.pkl, 'rb') as file: clf = pickle.load(file) ``` </details> # Model Card Authors This model card is written by following authors: huggingface # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ``` Tree Plot ![Tree Plot](tree.png) Confusion Matrix ![Confusion Matrix](confusion_matrix.png)
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2
research-backup
2022-11-22T12:57:06Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:39:41Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6670436507936508 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3770053475935829 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.37388724035608306 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4802668148971651 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.558 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.33771929824561403 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34953703703703703 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.893174627090553 - name: F1 (macro) type: f1_macro value: 0.8866591988732194 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7863849765258216 - name: F1 (macro) type: f1_macro value: 0.5308624907920565 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5704225352112676 - name: F1 (macro) type: f1_macro value: 0.5510856788391408 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9581275648605412 - name: F1 (macro) type: f1_macro value: 0.8644516035001516 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8523973675963648 - name: F1 (macro) type: f1_macro value: 0.8523947470987124 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.3770053475935829
  - Accuracy on SAT: 0.37388724035608306
  - Accuracy on BATS: 0.4802668148971651
  - Accuracy on U2: 0.33771929824561403
  - Accuracy on U4: 0.34953703703703703
  - Accuracy on Google: 0.558
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.893174627090553
  - Micro F1 score on CogALexV: 0.7863849765258216
  - Micro F1 score on EVALution: 0.5704225352112676
  - Micro F1 score on K&H+N: 0.9581275648605412
  - Micro F1 score on ROOT09: 0.8523973675963648
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.6670436507936508

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
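Building on the usage example above, relation embeddings of word pairs can be compared directly, e.g. to rank candidate analogies by cosine similarity. This is an illustrative sketch (the scoring loop and candidate pairs are not part of the RelBERT evaluation code):

```python
from relbert import RelBERT
from scipy.spatial.distance import cosine

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-2")

# Embed a query relation and a few candidate word pairs.
query = model.get_embedding(['Tokyo', 'Japan'])
candidates = {
    'Paris-France': model.get_embedding(['Paris', 'France']),
    'cat-dog': model.get_embedding(['cat', 'dog']),
}

# Higher cosine similarity = relation more similar to the query pair.
for name, emb in candidates.items():
    print(name, 1 - cosine(query, emb))
```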
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1
research-backup
2022-11-22T12:57:03Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:40:04Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7409920634920635 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.45454545454545453 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.456973293768546 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5591995553085047 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.756 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4780701754385965 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5023148148148148 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9014615036914269 - name: F1 (macro) type: f1_macro value: 0.8968728791615505 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8147887323943661 - name: F1 (macro) type: f1_macro value: 0.5972801854618999 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6224268689057422 - name: F1 (macro) type: f1_macro value: 0.6036991967237103 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.957571120539751 - name: F1 (macro) type: f1_macro value: 0.8755641184265396 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8812284550297712 - name: F1 (macro) type: f1_macro value: 0.8804686727120723 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.45454545454545453
  - Accuracy on SAT: 0.456973293768546
  - Accuracy on BATS: 0.5591995553085047
  - Accuracy on U2: 0.4780701754385965
  - Accuracy on U4: 0.5023148148148148
  - Accuracy on Google: 0.756
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.9014615036914269
  - Micro F1 score on CogALexV: 0.8147887323943661
  - Micro F1 score on EVALution: 0.6224268689057422
  - Micro F1 score on K&H+N: 0.957571120539751
  - Micro F1 score on ROOT09: 0.8812284550297712
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.7409920634920635

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-e-nce-1/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1
research-backup
2022-11-22T12:14:16Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:38:20Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.743095238095238 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4839572192513369 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4896142433234421 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6375764313507504 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.862 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4868421052631579 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5046296296296297 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8862437848425494 - name: F1 (macro) type: f1_macro value: 0.8821974165746824 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8199530516431925 - name: F1 (macro) type: f1_macro value: 0.6171125235158227 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6153846153846154 - name: F1 (macro) type: f1_macro value: 0.6078721080640733 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9533977881338248 - name: F1 (macro) type: f1_macro value: 0.8639519260786466 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8752742087120025 - name: F1 (macro) type: f1_macro value: 0.8711564298029004 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.4839572192513369
  - Accuracy on SAT: 0.4896142433234421
  - Accuracy on BATS: 0.6375764313507504
  - Accuracy on U2: 0.4868421052631579
  - Accuracy on U4: 0.5046296296296297
  - Accuracy on Google: 0.862
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.8862437848425494
  - Micro F1 score on CogALexV: 0.8199530516431925
  - Micro F1 score on EVALution: 0.6153846153846154
  - Micro F1 score on K&H+N: 0.9533977881338248
  - Micro F1 score on ROOT09: 0.8752742087120025
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.743095238095238

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-nce-1/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2
research-backup
2022-11-22T11:44:04Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:36:15Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7630555555555556 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47058823529411764 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.486646884272997 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5380767092829349 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.656 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4342105263157895 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4398148148148148 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8963387072472503 - name: F1 (macro) type: f1_macro value: 0.8890262114768261 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8199530516431925 - name: F1 (macro) type: f1_macro value: 0.6026484757530925 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6078006500541712 - name: F1 (macro) type: f1_macro value: 0.5904448977927308 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9474855672254295 - name: F1 (macro) type: f1_macro value: 0.8533808851230016 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8787214039486054 - name: F1 (macro) type: f1_macro value: 0.8756278739592934 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.47058823529411764
  - Accuracy on SAT: 0.486646884272997
  - Accuracy on BATS: 0.5380767092829349
  - Accuracy on U2: 0.4342105263157895
  - Accuracy on U4: 0.4398148148148148
  - Accuracy on Google: 0.656
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.8963387072472503
  - Micro F1 score on CogALexV: 0.8199530516431925
  - Micro F1 score on EVALution: 0.6078006500541712
  - Micro F1 score on K&H+N: 0.9474855672254295
  - Micro F1 score on ROOT09: 0.8787214039486054
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.7630555555555556

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 6
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-c-nce-2/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2
research-backup
2022-11-22T11:13:05Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:34:15Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.682936507936508 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4117647058823529 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4065281899109792 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.44580322401334077 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.42543859649122806 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4351851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.889709205966551 - name: F1 (macro) type: f1_macro value: 0.8856371272538675 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7983568075117371 - name: F1 (macro) type: f1_macro value: 0.5722493642763411 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6034669555796316 - name: F1 (macro) type: f1_macro value: 0.5834867979418635 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9533977881338248 - name: F1 (macro) type: f1_macro value: 0.848937537646962 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8718270134753996 - name: F1 (macro) type: f1_macro value: 0.8714610694444686 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.4117647058823529
  - Accuracy on SAT: 0.4065281899109792
  - Accuracy on BATS: 0.44580322401334077
  - Accuracy on U2: 0.42543859649122806
  - Accuracy on U4: 0.4351851851851852
  - Accuracy on Google: 0.618
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.889709205966551
  - Micro F1 score on CogALexV: 0.7983568075117371
  - Micro F1 score on EVALution: 0.6034669555796316
  - Micro F1 score on K&H+N: 0.9533977881338248
  - Micro F1 score on ROOT09: 0.8718270134753996
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.682936507936508

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 6
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-2/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1
research-backup
2022-11-22T11:10:41Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:34:44Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8926984126984127 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4572192513368984 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4599406528189911 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5369649805447471 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.748 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4298245614035088 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4375 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8945306614434232 - name: F1 (macro) type: f1_macro value: 0.8889050346897381 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7887323943661971 - name: F1 (macro) type: f1_macro value: 0.5429622796506292 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6132177681473456 - name: F1 (macro) type: f1_macro value: 0.5967298388536921 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9580580093204424 - name: F1 (macro) type: f1_macro value: 0.8772669717354012 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8733939204011282 - name: F1 (macro) type: f1_macro value: 0.865464870691388 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.4572192513368984
    - Accuracy on SAT: 0.4599406528189911
    - Accuracy on BATS: 0.5369649805447471
    - Accuracy on U2: 0.4298245614035088
    - Accuracy on U4: 0.4375
    - Accuracy on Google: 0.748
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8945306614434232
    - Micro F1 score on CogALexV: 0.7887323943661971
    - Micro F1 score on EVALution: 0.6132177681473456
    - Micro F1 score on K&H+N: 0.9580580093204424
    - Micro F1 score on ROOT09: 0.8733939204011282
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8926984126984127

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-b-nce-1/raw/main/trainer_config.json). One possible reading of the `temperature_nce_rank` schedule above is sketched after this card.

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
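The `temperature_nce_rank` entry above specifies a rank-dependent NCE temperature that moves between `min` and `max`. Purely as a reading aid, here is one plausible interpretation of the `linear` type; the authoritative formula lives in the relbert training code and may differ in detail.

```python
def rank_temperature(rank: int, n_ranks: int, t_min: float = 0.01, t_max: float = 0.05) -> float:
    """Hypothetical 'linear' schedule: interpolate the NCE temperature across rank levels."""
    if n_ranks <= 1:
        return t_min
    return t_min + (t_max - t_min) * rank / (n_ranks - 1)

# With 5 rank levels this yields 0.01, 0.02, 0.03, 0.04, 0.05.
print([round(rank_temperature(r, n_ranks=5), 3) for r in range(5)])
```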
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2
research-backup
2022-11-22T10:11:20Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:30:34Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8311904761904761 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47058823529411764 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47774480712166173 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5630906058921623 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.746 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4605263157894737 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.48148148148148145 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9136658128672593 - name: F1 (macro) type: f1_macro value: 0.9119300574747814 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8356807511737089 - name: F1 (macro) type: f1_macro value: 0.6445552217787743 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6598049837486457 - name: F1 (macro) type: f1_macro value: 0.6390833044290024 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9680740070946651 - name: F1 (macro) type: f1_macro value: 0.9022447613880005 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.880288310874334 - name: F1 (macro) type: f1_macro value: 0.8774948713508829 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.47058823529411764
    - Accuracy on SAT: 0.47774480712166173
    - Accuracy on BATS: 0.5630906058921623
    - Accuracy on U2: 0.4605263157894737
    - Accuracy on U4: 0.48148148148148145
    - Accuracy on Google: 0.746
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9136658128672593
    - Micro F1 score on CogALexV: 0.8356807511737089
    - Micro F1 score on EVALution: 0.6598049837486457
    - Micro F1 score on K&H+N: 0.9680740070946651
    - Micro F1 score on ROOT09: 0.880288310874334
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8311904761904761

The micro/macro F1 distinction reported above is illustrated after this card.

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
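The classification tables above report both micro F1 ("F1") and macro F1 ("F1 (macro)"). The toy scikit-learn example below (scikit-learn here is a convenience assumption, not necessarily what the RelBERT evaluation uses) shows why the two diverge: micro F1 pools every decision, while macro F1 averages per-class scores, so rare classes count for more under macro.

```python
from sklearn.metrics import f1_score

# Toy predictions over an imbalanced three-class relation problem.
y_true = ["hyper", "hyper", "hyper", "mero", "mero", "random"]
y_pred = ["hyper", "hyper", "hyper", "mero", "random", "mero"]

print(f1_score(y_true, y_pred, average="micro"))  # ~0.667: pools all decisions
print(f1_score(y_true, y_pred, average="macro"))  # 0.5: unweighted mean of per-class F1
```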
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1
research-backup
2022-11-22T10:08:40Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:30:53Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.808968253968254 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4839572192513369 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4896142433234421 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6264591439688716 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.748 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.36403508771929827 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.43287037037037035 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9186379388277837 - name: F1 (macro) type: f1_macro value: 0.9146569952039126 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8244131455399061 - name: F1 (macro) type: f1_macro value: 0.6192186484290235 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6511375947995667 - name: F1 (macro) type: f1_macro value: 0.6358411811809679 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9683522292550601 - name: F1 (macro) type: f1_macro value: 0.9036902248765999 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8824819805703541 - name: F1 (macro) type: f1_macro value: 0.8801659277988089 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.4839572192513369
    - Accuracy on SAT: 0.4896142433234421
    - Accuracy on BATS: 0.6264591439688716
    - Accuracy on U2: 0.36403508771929827
    - Accuracy on U4: 0.43287037037037035
    - Accuracy on Google: 0.748
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9186379388277837
    - Micro F1 score on CogALexV: 0.8244131455399061
    - Micro F1 score on EVALution: 0.6511375947995667
    - Micro F1 score on K&H+N: 0.9683522292550601
    - Micro F1 score on ROOT09: 0.8824819805703541
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.808968253968254

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-e-nce-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
DigitalUmuganda/lingala_vits_tts
DigitalUmuganda
2022-11-22T10:08:11Z
0
1
null
[ "region:us" ]
null
2022-11-21T22:12:13Z
# Lingala Text-to-Speech

This model was trained on OpenSLR's 71.6-hour aligned Lingala Bible dataset.

## Model description

The model is a Conditional Variational Autoencoder with Adversarial Learning (VITS), an end-to-end approach to the text-to-speech task. It was trained with the espnet2 toolkit.

## Usage

First install espnet2:

```sh
pip install espnet
```

Download the model and the config files from this repo. To generate a wav file with this model, run the following:

```python
from espnet2.bin.tts_inference import Text2Speech
import soundfile as sf

text2speech = Text2Speech(train_config="config.yaml", model_file="train.total_count.best.pth")
wav = text2speech("oyo kati na Ye ozwi lisiko mpe bolimbisi ya masumu")["wav"]
sf.write("outfile.wav", wav.numpy(), text2speech.fs, "PCM_16")
```
Vandita/distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1
Vandita
2022-11-22T10:00:23Z
210
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-11-22T09:46:40Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8333

## Model description

More information needed

## Intended uses & limitations

More information needed. (An illustrative masked-language-modelling usage sketch follows this card.)

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2176        | 1.0   | 768  | 2.9178          |
| 2.9632        | 2.0   | 1536 | 2.8355          |
| 2.9201        | 3.0   | 2304 | 2.8462          |

### Framework versions

- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
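Since the usage section above is still empty, here is a minimal fill-mask sketch with the 🤗 Transformers pipeline; the example sentence is an arbitrary placeholder, and `<mask>` is used because the base model is a RoBERTa variant.

```python
from transformers import pipeline

unmasker = pipeline(
    "fill-mask",
    model="Vandita/distilroberta-base-finetuned-SarcojiComplEmojisDistilRoberta-baseMLM1",
)
# DistilRoBERTa uses <mask> (not [MASK]) as its mask token.
for prediction in unmasker("What a <mask> day."):
    print(prediction["token_str"], round(prediction["score"], 3))
```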
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1
research-backup
2022-11-22T09:36:49Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:29:06Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8196825396825397 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.56951871657754 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5667655786350149 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7048360200111173 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.928 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5219298245614035 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5254629629629629 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9171312339912611 - name: F1 (macro) type: f1_macro value: 0.9144097053161149 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8591549295774648 - name: F1 (macro) type: f1_macro value: 0.6897906667708522 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6598049837486457 - name: F1 (macro) type: f1_macro value: 0.6435072053448491 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9591708979620227 - name: F1 (macro) type: f1_macro value: 0.8844226567513357 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8990911939830774 - name: F1 (macro) type: f1_macro value: 0.8971436130443764 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.56951871657754
    - Accuracy on SAT: 0.5667655786350149
    - Accuracy on BATS: 0.7048360200111173
    - Accuracy on U2: 0.5219298245614035
    - Accuracy on U4: 0.5254629629629629
    - Accuracy on Google: 0.928
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9171312339912611
    - Micro F1 score on CogALexV: 0.8591549295774648
    - Micro F1 score on EVALution: 0.6598049837486457
    - Micro F1 score on K&H+N: 0.9591708979620227
    - Micro F1 score on ROOT09: 0.8990911939830774
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8196825396825397

An illustrative classifier-on-embeddings recipe is sketched after this card.

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
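Lexical relation classification scores such as those above are typically obtained by training a supervised classifier on top of the frozen relation embeddings. As a purely illustrative recipe (the word pairs, labels, and choice of logistic regression are assumptions, not the official evaluation setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-d-nce-1")

# Tiny hand-made training set; the real benchmarks use relbert/lexical_relation_classification.
pairs = [['dog', 'animal'], ['car', 'vehicle'], ['dog', 'cat'], ['car', 'bicycle']]
labels = ['hypernym', 'hypernym', 'co-hyponym', 'co-hyponym']

X = np.stack([np.asarray(model.get_embedding(p), dtype=float) for p in pairs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(np.stack([np.asarray(model.get_embedding(['apple', 'fruit']), dtype=float)])))
```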
m-aliabbas/wav2vec2-base-timit-demo-idrak-paperspace1
m-aliabbas
2022-11-22T09:36:03Z
109
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-11-22T09:17:56Z
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-idrak-paperspace1
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-timit-demo-idrak-paperspace1

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3623
- Wer: 0.3471

## Model description

More information needed

## Intended uses & limitations

More information needed. (An illustrative transcription sketch follows this card.)

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1034        | 0.87  | 500  | 0.3623          | 0.3471 |

### Framework versions

- Transformers 4.17.0
- Pytorch 1.12.0+cu116
- Datasets 1.18.3
- Tokenizers 0.12.1
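As the usage section above is empty, a minimal transcription sketch with the Transformers pipeline follows; the audio path is a placeholder, and 16 kHz mono input is assumed because the model derives from wav2vec2-base.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="m-aliabbas/wav2vec2-base-timit-demo-idrak-paperspace1",
)
# Placeholder path; wav2vec2-base expects 16 kHz, single-channel audio.
print(asr("sample_16khz.wav")["text"])
```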
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2
research-backup
2022-11-22T09:14:07Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:26:54Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.5858333333333333 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3235294117647059 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3264094955489614 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40355753196220123 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.454 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3991228070175439 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3680555555555556 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8927226156395962 - name: F1 (macro) type: f1_macro value: 0.8860530490594479 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7772300469483568 - name: F1 (macro) type: f1_macro value: 0.49603297373551636 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5910075839653305 - name: F1 (macro) type: f1_macro value: 0.5855884123582632 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9483897892467135 - name: F1 (macro) type: f1_macro value: 0.8589949863564919 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8614854277655907 - name: F1 (macro) type: f1_macro value: 0.8600976443012404 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3235294117647059
    - Accuracy on SAT: 0.3264094955489614
    - Accuracy on BATS: 0.40355753196220123
    - Accuracy on U2: 0.3991228070175439
    - Accuracy on U4: 0.3680555555555556
    - Accuracy on Google: 0.454
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8927226156395962
    - Micro F1 score on CogALexV: 0.7772300469483568
    - Micro F1 score on EVALution: 0.5910075839653305
    - Micro F1 score on K&H+N: 0.9483897892467135
    - Micro F1 score on ROOT09: 0.8614854277655907
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.5858333333333333

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-c-nce-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1
research-backup
2022-11-22T08:29:46Z
106
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:25:26Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8494444444444444 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.46524064171123 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4540059347181009 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5469705391884381 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.718 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40789473684210525 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.47685185185185186 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9189392797950882 - name: F1 (macro) type: f1_macro value: 0.9111037247922152 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8328638497652581 - name: F1 (macro) type: f1_macro value: 0.6407705654112161 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6511375947995667 - name: F1 (macro) type: f1_macro value: 0.6314510440381573 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9667524518327885 - name: F1 (macro) type: f1_macro value: 0.8976834598713519 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8890629896584142 - name: F1 (macro) type: f1_macro value: 0.8850843021734317 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.46524064171123
    - Accuracy on SAT: 0.4540059347181009
    - Accuracy on BATS: 0.5469705391884381
    - Accuracy on U2: 0.40789473684210525
    - Accuracy on U4: 0.47685185185185186
    - Accuracy on Google: 0.718
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9189392797950882
    - Micro F1 score on CogALexV: 0.8328638497652581
    - Micro F1 score on EVALution: 0.6511375947995667
    - Micro F1 score on K&H+N: 0.9667524518327885
    - Micro F1 score on ROOT09: 0.8890629896584142
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8494444444444444

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 5
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 1
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-b-nce-1/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2
research-backup
2022-11-22T08:05:07Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-22T07:22:22Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7766666666666666 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5106951871657754 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5192878338278932 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6336853807670928 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.836 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4956140350877193 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4675925925925926 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9102003917432575 - name: F1 (macro) type: f1_macro value: 0.9048163432871014 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8446009389671362 - name: F1 (macro) type: f1_macro value: 0.6555540264317997 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6646803900325027 - name: F1 (macro) type: f1_macro value: 0.6451731192779142 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9483897892467135 - name: F1 (macro) type: f1_macro value: 0.8625789446469025 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8799749294891883 - name: F1 (macro) type: f1_macro value: 0.879747290888475 --- # relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.5106951871657754
    - Accuracy on SAT: 0.5192878338278932
    - Accuracy on BATS: 0.6336853807670928
    - Accuracy on U2: 0.4956140350877193
    - Accuracy on U4: 0.4675925925925926
    - Accuracy on Google: 0.836
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9102003917432575
    - Micro F1 score on CogALexV: 0.8446009389671362
    - Micro F1 score on EVALution: 0.6646803900325027
    - Micro F1 score on K&H+N: 0.9483897892467135
    - Micro F1 score on ROOT09: 0.8799749294891883
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7766666666666666

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 9
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 2
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-mask-prompt-a-nce-2/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0
research-backup
2022-11-22T07:33:39Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v6", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-11-21T15:16:41Z
--- datasets: - relbert/semeval2012_relational_similarity_v6 model-index: - name: relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0 results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7536111111111111 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4090909090909091 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.41543026706231456 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3979988882712618 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.536 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.40350877192982454 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3726851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8173873738134699 - name: F1 (macro) type: f1_macro value: 0.7651315053116533 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7805164319248827 - name: F1 (macro) type: f1_macro value: 0.37750596878875403 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6132177681473456 - name: F1 (macro) type: f1_macro value: 0.5897140261349971 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8569938095569312 - name: F1 (macro) type: f1_macro value: 0.5489295192723234 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8533375117518018 - name: F1 (macro) type: f1_macro value: 0.8386152954603926 --- # relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0 RelBERT fine-tuned from [roberta-base](https://huggingface.co/roberta-base) on [relbert/semeval2012_relational_similarity_v6](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v6). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.4090909090909091
    - Accuracy on SAT: 0.41543026706231456
    - Accuracy on BATS: 0.3979988882712618
    - Accuracy on U2: 0.40350877192982454
    - Accuracy on U4: 0.3726851851851852
    - Accuracy on Google: 0.536
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8173873738134699
    - Micro F1 score on CogALexV: 0.7805164319248827
    - Micro F1 score on EVALution: 0.6132177681473456
    - Micro F1 score on K&H+N: 0.8569938095569312
    - Micro F1 score on ROOT09: 0.8533375117518018
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7536111111111111

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (768, ), the hidden size of roberta-base
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-base
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity_v6
- split: train
- split_eval: validation
- template_mode: manual
- loss_function: triplet
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 10
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 320
- gradient_accumulation: 8
- relation_level: None

The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-base-semeval2012-v6-average-prompt-d-triplet-0/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```