Dataset schema (column, type, and observed range of values). Each record below lists these fields in order, with the model card stored as a single flattened string.

| Column | Type | Min | Max |
|:--------------|:-----------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-05 06:27:37 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (539 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-05 06:27:15 |
| card | string (length) | 11 | 1.01M |
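If a dump like this is published as a Hub dataset, it can be loaded and queried with the `datasets` library. A minimal sketch under that assumption; the repository path below is a placeholder, not the actual name of this dump, while the column names come from the schema above:

```python
from datasets import load_dataset

# Placeholder path -- substitute the repository this metadata dump was exported from.
ds = load_dataset("some-user/hub-model-metadata", split="train")

# Example query: the five most-downloaded token-classification models.
ner = ds.filter(lambda row: row["pipeline_tag"] == "token-classification")
for row in sorted(ner, key=lambda r: r["downloads"], reverse=True)[:5]:
    print(row["modelId"], row["downloads"], row["likes"])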
gcmsrc/xlm-roberta-base-finetuned-panx-it
gcmsrc
2022-09-20T09:19:59Z
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-20T09:17:52Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8207236842105264 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2571 - F1: 0.8207 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8262 | 1.0 | 70 | 0.3182 | 0.7502 | | 0.2785 | 2.0 | 140 | 0.2685 | 0.7966 | | 0.1816 | 3.0 | 210 | 0.2571 | 0.8207 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
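The card above leaves usage unspecified ("More information needed"). A minimal sketch of how a token-classification checkpoint like this one is typically queried with the `transformers` pipeline; the Italian example sentence and the `aggregation_strategy` setting are illustrative assumptions, not part of the card:

```python
from transformers import pipeline

# NER pipeline for the Italian PAN-X subset this model was fine-tuned on.
ner = pipeline(
    "token-classification",
    model="gcmsrc/xlm-roberta-base-finetuned-panx-it",
    aggregation_strategy="simple",  # merge sub-tokens into whole entities
)
print(ner("Giuseppe Verdi nacque a Busseto, in Italia."))
```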
gcmsrc/xlm-roberta-base-finetuned-panx-de-fr
gcmsrc
2022-09-20T09:11:56Z
108
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T16:28:48Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1642 - F1: 0.8589 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2886 | 1.0 | 715 | 0.1804 | 0.8293 | | 0.1458 | 2.0 | 1430 | 0.1574 | 0.8494 | | 0.0931 | 3.0 | 2145 | 0.1642 | 0.8589 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
qunaieer/distilbert-base-uncased-finetuned-emotion
qunaieer
2022-09-20T08:23:41Z
103
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-20T08:13:34Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9259893400415584 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2248 - Accuracy: 0.926 - F1: 0.9260 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8323 | 1.0 | 250 | 0.3238 | 0.908 | 0.9053 | | 0.2571 | 2.0 | 500 | 0.2248 | 0.926 | 0.9260 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
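This card also lacks a usage section. A hedged sketch of querying the emotion classifier with the `transformers` pipeline; `top_k=None` (which returns scores for all labels) assumes a recent `transformers` release, and older versions used `return_all_scores=True` instead:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="qunaieer/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for every emotion label, not just the argmax
)
print(classifier("I'm thrilled the fine-tuning finally worked!"))
```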
khave/anirec-model-v1
khave
2022-09-20T08:16:05Z
1
0
sentence-transformers
[ "sentence-transformers", "pytorch", "distilbert", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-20T08:11:59Z
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model is straightforward once [sentence-transformers](https://www.SBERT.net) is installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model; then you apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling -- take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 100000 with parameters:

```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`gpl.toolkit.loss.MarginDistillationLoss`

Parameters of the fit()-method:

```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": 100000,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
MiguelCosta/distilbert-1-finetuned-cisco
MiguelCosta
2022-09-20T08:02:41Z
74
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-20T07:35:16Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MiguelCosta/distilbert-1-finetuned-cisco results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MiguelCosta/distilbert-1-finetuned-cisco This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.2723 - Validation Loss: 2.4284 - Epoch: 39 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.4357 | 4.3213 | 0 | | 4.1763 | 3.9111 | 1 | | 3.8803 | 3.6751 | 2 | | 3.7135 | 3.5458 | 3 | | 3.5861 | 3.4489 | 4 | | 3.5176 | 3.4323 | 5 | | 3.4022 | 3.3658 | 6 | | 3.3259 | 3.2113 | 7 | | 3.2499 | 3.0623 | 8 | | 3.2129 | 3.0298 | 9 | | 3.1177 | 2.9181 | 10 | | 3.0144 | 2.9550 | 11 | | 2.9502 | 2.8758 | 12 | | 2.9074 | 2.8674 | 13 | | 2.8922 | 2.7877 | 14 | | 2.8333 | 2.8283 | 15 | | 2.7982 | 2.7717 | 16 | | 2.7453 | 2.7578 | 17 | | 2.6611 | 2.5425 | 18 | | 2.6330 | 2.6145 | 19 | | 2.5642 | 2.5415 | 20 | | 2.5352 | 2.5437 | 21 | | 2.4939 | 2.4214 | 22 | | 2.4287 | 2.4882 | 23 | | 2.4142 | 2.5091 | 24 | | 2.3676 | 2.3997 | 25 | | 2.3121 | 2.4515 | 26 | | 2.3085 | 2.2349 | 27 | | 2.2839 | 2.3205 | 28 | | 2.3248 | 2.3273 | 29 | | 2.2763 | 2.2583 | 30 | | 2.2710 | 2.3896 | 31 | | 2.2950 | 2.3224 | 32 | | 2.3026 | 2.3910 | 33 | | 2.3116 | 2.3255 | 34 | | 2.2640 | 2.3186 | 35 | | 2.2958 | 2.3332 | 36 | | 2.3256 | 2.3646 | 37 | | 2.2831 | 2.3751 | 38 | | 2.2723 | 2.4284 | 39 | ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
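The card gives no usage example. A hedged sketch of querying the model with the fill-mask pipeline; pinning `framework="tf"` is an assumption based on the repository's "tf" tag (it appears to ship TensorFlow weights only), and the Cisco-flavored prompt is illustrative:

```python
from transformers import pipeline

# The repository is tagged "tf", so the sketch loads the TensorFlow weights explicitly.
fill = pipeline("fill-mask", model="MiguelCosta/distilbert-1-finetuned-cisco", framework="tf")

for candidate in fill("Configure the [MASK] interface before enabling routing."):
    print(candidate["token_str"], round(candidate["score"], 4))
```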
michael20at/testpyramidsrnd
michael20at
2022-09-20T07:59:27Z
8
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-09-20T07:59:19Z
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---

# **ppo** Agent playing **Pyramids**

This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: michael20at/testpyramidsrnd
3. Select your *.nn or *.onnx file
4. Click on "Watch the agent play" 👀
jonas/sdg_classifier_osdg
jonas
2022-09-20T06:46:22Z
134
7
transformers
[ "transformers", "pytorch", "bert", "text-classification", "en", "dataset:jonas/osdg_sdg_data_processed", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-05-24T11:49:08Z
---
language: en
widget:
- text: "Ending all forms of discrimination against women and girls is not only a basic human right, but it is also crucial to accelerating sustainable development. It has been proven time and again that empowering women and girls has a multiplier effect, and helps drive up economic growth and development across the board. Since 2000, UNDP, together with our UN partners and the rest of the global community, has made gender equality central to our work. We have seen remarkable progress since then. More girls are now in school compared to 15 years ago, and most regions have reached gender parity in primary education. Women now make up to 41 percent of paid workers outside of agriculture, compared to 35 percent in 1990."
datasets:
- jonas/osdg_sdg_data_processed
co2_eq_emissions: 0.0653263174784986
---

# About

A machine learning model for classifying text according to the first 15 of the 17 United Nations Sustainable Development Goals. Note that the model was trained on quite short paragraphs (around 100 words) and performs best with similar input sizes. The data comes from the amazing https://osdg.ai/ community!

* An improved version of this model (a fine-tuned RoBERTa) is available here: https://huggingface.co/jonas/roberta-base-finetuned-sdg

# Model Training Specifics

- Problem type: Multi-class Classification
- Model ID: 900229515
- CO2 Emissions (in grams): 0.0653263174784986

## Validation Metrics

- Loss: 0.3644874095916748
- Accuracy: 0.8972544579677328
- Macro F1: 0.8500873710954522
- Micro F1: 0.8972544579677328
- Weighted F1: 0.8937529692986061
- Macro Precision: 0.8694369727467804
- Micro Precision: 0.8972544579677328
- Weighted Precision: 0.8946984684977016
- Macro Recall: 0.8405065997404059
- Micro Recall: 0.8972544579677328

## Usage

You can use cURL to access this model:

```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/jonas/autotrain-osdg-sdg-classifier-900229515
```

Or the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("jonas/sdg_classifier_osdg", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("jonas/sdg_classifier_osdg", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
MiguelCosta/distilbert-finetuned-cisco
MiguelCosta
2022-09-20T05:49:33Z
65
0
transformers
[ "transformers", "tf", "distilbert", "fill-mask", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-09-17T07:33:44Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: MiguelCosta/distilbert-finetuned-cisco results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # MiguelCosta/distilbert-finetuned-cisco This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.4181 - Validation Loss: 4.2370 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -964, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.4181 | 4.2370 | 0 | ### Framework versions - Transformers 4.22.1 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
wormed/DialoGPT-small-denai
wormed
2022-09-20T03:53:22Z
0
0
null
[ "conversational", "region:us" ]
null
2022-09-20T03:42:39Z
--- tags: - conversational ---
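The card above carries only a `conversational` tag and no usage notes. A hedged sketch of one dialogue turn, assuming this checkpoint follows the standard DialoGPT generation pattern its name suggests; the greeting and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("wormed/DialoGPT-small-denai")
model = AutoModelForCausalLM.from_pretrained("wormed/DialoGPT-small-denai")

# One turn of dialogue: encode the user message plus the EOS token, then generate a reply.
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```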
sd-concepts-library/bloo
sd-concepts-library
2022-09-20T03:24:28Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-20T03:24:23Z
--- license: mit --- ### Bloo on Stable Diffusion This is the `<owl-guy>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<owl-guy> 0](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/6.jpeg) ![<owl-guy> 1](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/5.jpeg) ![<owl-guy> 2](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/0.jpeg) ![<owl-guy> 3](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/4.jpeg) ![<owl-guy> 4](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/1.jpeg) ![<owl-guy> 5](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/3.jpeg) ![<owl-guy> 6](https://huggingface.co/sd-concepts-library/bloo/resolve/main/concept_images/2.jpeg)
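Besides the linked Colab notebooks, a concept like this can be loaded programmatically. A hedged sketch using `diffusers` (its `load_textual_inversion` helper exists in recent releases); the `runwayml/stable-diffusion-v1-5` base checkpoint, the CUDA device, and the prompt are assumptions, not part of the card:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; sd-concepts-library embeddings target Stable Diffusion v1.x.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Registers the <owl-guy> token and its learned embedding from the concept repo.
pipe.load_textual_inversion("sd-concepts-library/bloo")

image = pipe("a portrait of <owl-guy> in a forest").images[0]
image.save("owl_guy.png")
```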
sd-concepts-library/cumbia-peruana
sd-concepts-library
2022-09-20T03:14:35Z
0
3
null
[ "license:mit", "region:us" ]
null
2022-09-20T03:14:28Z
--- license: mit --- ### cumbia peruana on Stable Diffusion This is the `<cumbia-peru>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<cumbia-peru> 0](https://huggingface.co/sd-concepts-library/cumbia-peruana/resolve/main/concept_images/0.jpeg) ![<cumbia-peru> 1](https://huggingface.co/sd-concepts-library/cumbia-peruana/resolve/main/concept_images/4.jpeg) ![<cumbia-peru> 2](https://huggingface.co/sd-concepts-library/cumbia-peruana/resolve/main/concept_images/1.jpeg) ![<cumbia-peru> 3](https://huggingface.co/sd-concepts-library/cumbia-peruana/resolve/main/concept_images/3.jpeg) ![<cumbia-peru> 4](https://huggingface.co/sd-concepts-library/cumbia-peruana/resolve/main/concept_images/2.jpeg)
rram12/q-Taxi-v3
rram12
2022-09-20T02:39:05Z
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-09-20T02:15:13Z
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
    metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions from the accompanying
# course notebook (not part of a published library); define or import them first.
model = load_from_hub(repo_id="rram12/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
rram12/q-FrozenLake-v1-4x4-noSlippery
rram12
2022-09-20T02:26:17Z
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
2022-09-20T02:26:11Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import gym

# load_from_hub and evaluate_agent are helper functions from the accompanying
# course notebook (not part of a published library); define or import them first.
model = load_from_hub(repo_id="rram12/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
weirdguitarist/wav2vec2-base-stac-local
weirdguitarist
2022-09-20T01:58:36Z
19
0
transformers
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-09-13T10:27:39Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-stac-local results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-stac-local This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9746 - Wer: 0.7828 - Cer: 0.3202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 2.0603 | 1.0 | 2369 | 2.1282 | 0.9517 | 0.5485 | | 1.6155 | 2.0 | 4738 | 1.6196 | 0.9060 | 0.4565 | | 1.3462 | 3.0 | 7107 | 1.4331 | 0.8379 | 0.3983 | | 1.1819 | 4.0 | 9476 | 1.3872 | 0.8233 | 0.3717 | | 1.0189 | 5.0 | 11845 | 1.4066 | 0.8328 | 0.3660 | | 0.9026 | 6.0 | 14214 | 1.3502 | 0.8198 | 0.3508 | | 0.777 | 7.0 | 16583 | 1.3016 | 0.7922 | 0.3433 | | 0.7109 | 8.0 | 18952 | 1.2662 | 0.8302 | 0.3510 | | 0.6766 | 9.0 | 21321 | 1.4321 | 0.8103 | 0.3368 | | 0.6078 | 10.0 | 23690 | 1.3592 | 0.7871 | 0.3360 | | 0.5958 | 11.0 | 26059 | 1.4389 | 0.7819 | 0.3397 | | 0.5094 | 12.0 | 28428 | 1.3391 | 0.8017 | 0.3239 | | 0.4567 | 13.0 | 30797 | 1.4718 | 0.8026 | 0.3347 | | 0.4448 | 14.0 | 33166 | 1.7450 | 0.8043 | 0.3424 | | 0.3976 | 15.0 | 35535 | 1.4581 | 0.7888 | 0.3283 | | 0.3449 | 16.0 | 37904 | 1.5688 | 0.8078 | 0.3397 | | 0.3046 | 17.0 | 40273 | 1.8630 | 0.8060 | 0.3448 | | 0.2983 | 18.0 | 42642 | 1.8400 | 0.8190 | 0.3425 | | 0.2728 | 19.0 | 45011 | 1.6726 | 0.8034 | 0.3280 | | 0.2579 | 20.0 | 47380 | 1.6661 | 0.8138 | 0.3249 | | 0.2169 | 21.0 | 49749 | 1.7389 | 0.8138 | 0.3277 | | 0.2498 | 22.0 | 52118 | 1.7205 | 0.7948 | 0.3207 | | 0.1831 | 23.0 | 54487 | 1.8641 | 0.8103 | 0.3229 | | 0.1927 | 24.0 | 56856 | 1.8724 | 0.7784 | 0.3251 | | 0.1649 | 25.0 | 59225 | 1.9187 | 0.7974 | 0.3277 | | 0.1594 | 26.0 | 61594 | 1.9022 | 0.7828 | 0.3220 | | 0.1338 | 27.0 | 63963 | 1.9303 | 0.7862 | 0.3212 | | 0.1441 | 28.0 | 66332 | 1.9528 | 0.7845 | 0.3207 | | 0.129 | 29.0 | 68701 | 1.9676 | 0.7819 | 0.3212 | | 0.1169 | 30.0 | 71070 | 1.9746 | 0.7828 | 0.3202 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.8.1+cu102 - Datasets 1.18.3 - Tokenizers 0.12.1
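The card above leaves usage unspecified. A hedged sketch of transcription with the `transformers` ASR pipeline; the audio path is a placeholder, and the 16 kHz note reflects the usual wav2vec2 convention rather than anything stated in the card:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="weirdguitarist/wav2vec2-base-stac-local")

# "recording.wav" is a placeholder path; wav2vec2-style models expect 16 kHz mono audio,
# and the pipeline decodes/resamples input files via ffmpeg.
print(asr("recording.wav")["text"])
```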
huggingtweets/markiplier-mrbeast-xqc
huggingtweets
2022-09-20T00:43:59Z
110
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-20T00:43:52Z
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/994592419705274369/RLplF55e_400x400.jpg&#39;)"></div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1571030673078591490/TqoPeGER_400x400.jpg&#39;)"></div>
<div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1511102924310544387/j6E29xq6_400x400.jpg&#39;)"></div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MrBeast & xQc & Mark</div>
<div style="text-align: center; font-size: 14px;">@markiplier-mrbeast-xqc</div>
</div>

I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).

Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!

## How does it work?

The model uses the following pipeline.

![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true)

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).

## Training data

The model was trained on tweets from MrBeast & xQc & Mark.

| Data | MrBeast | xQc | Mark |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 3241 | 3226 |
| Retweets | 119 | 116 | 306 |
| Short tweets | 725 | 410 | 392 |
| Tweets kept | 2404 | 2715 | 2528 |

[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3p1p4x3v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

## Training procedure

The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @markiplier-mrbeast-xqc's tweets.

Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/13fbl2ac) for full transparency and reproducibility.

At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/13fbl2ac/artifacts) is logged and versioned.

## How to use

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingtweets/markiplier-mrbeast-xqc')
generator("My dream is", num_return_sequences=5)
```

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model.

## About

*Built by Boris Dayma*

[![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma)

For more details, visit the project repository.
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
sd-concepts-library/wojaks-now
sd-concepts-library
2022-09-20T00:19:17Z
0
4
null
[ "license:mit", "region:us" ]
null
2022-09-20T00:19:10Z
--- license: mit --- ### wojaks-now on Stable Diffusion This is the `<red-wojak>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<red-wojak> 0](https://huggingface.co/sd-concepts-library/wojaks-now/resolve/main/concept_images/0.jpeg) ![<red-wojak> 1](https://huggingface.co/sd-concepts-library/wojaks-now/resolve/main/concept_images/1.jpeg) ![<red-wojak> 2](https://huggingface.co/sd-concepts-library/wojaks-now/resolve/main/concept_images/2.jpeg)
sd-concepts-library/all-rings-albuns
sd-concepts-library
2022-09-19T23:53:52Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-19T23:53:38Z
--- license: mit --- ### all rings albuns on Stable Diffusion This is the `<rings-all-albuns>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<rings-all-albuns> 0](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/6.jpeg) ![<rings-all-albuns> 1](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/5.jpeg) ![<rings-all-albuns> 2](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/0.jpeg) ![<rings-all-albuns> 3](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/4.jpeg) ![<rings-all-albuns> 4](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/1.jpeg) ![<rings-all-albuns> 5](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/3.jpeg) ![<rings-all-albuns> 6](https://huggingface.co/sd-concepts-library/all-rings-albuns/resolve/main/concept_images/2.jpeg)
SandraB/mt5-small-mlsum_training_sample
SandraB
2022-09-19T23:36:24Z
111
0
transformers
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "dataset:mlsum", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-09-19T13:17:26Z
--- license: apache-2.0 tags: - summarization - generated_from_trainer datasets: - mlsum metrics: - rouge model-index: - name: mt5-small-mlsum_training_sample results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: mlsum type: mlsum config: de split: train args: de metrics: - name: Rouge1 type: rouge value: 28.2078 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-mlsum_training_sample This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the mlsum dataset. It achieves the following results on the evaluation set: - Loss: 1.9727 - Rouge1: 28.2078 - Rouge2: 19.0712 - Rougel: 26.2267 - Rougelsum: 26.9462 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 1.3193 | 1.0 | 6875 | 2.1352 | 25.8941 | 17.4672 | 24.2858 | 24.924 | | 1.2413 | 2.0 | 13750 | 2.0528 | 26.6221 | 18.1166 | 24.8233 | 25.5111 | | 1.1844 | 3.0 | 20625 | 1.9783 | 27.0518 | 18.3457 | 25.2288 | 25.8919 | | 1.0403 | 4.0 | 27500 | 1.9487 | 27.8154 | 18.9701 | 25.9435 | 26.6578 | | 0.9582 | 5.0 | 34375 | 1.9374 | 27.6863 | 18.7723 | 25.7667 | 26.4694 | | 0.8992 | 6.0 | 41250 | 1.9353 | 27.8959 | 18.919 | 26.0434 | 26.7262 | | 0.8109 | 7.0 | 48125 | 1.9492 | 28.0644 | 18.8873 | 26.0628 | 26.757 | | 0.7705 | 8.0 | 55000 | 1.9727 | 28.2078 | 19.0712 | 26.2267 | 26.9462 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
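Usage is not documented in the card above. A hedged sketch with the `transformers` summarization pipeline; the German placeholder article reflects the card's `config: de` (the German MLSUM split), and the generation lengths are illustrative:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SandraB/mt5-small-mlsum_training_sample")

# Placeholder input: the model was fine-tuned on the German (de) split of MLSUM,
# so a German news article is the intended input.
article = "Die Bundesregierung hat am Montag neue Maßnahmen angekündigt ..."
print(summarizer(article, max_length=80, min_length=10)[0]["summary_text"])
```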
Isaacp/xlm-roberta-base-finetuned-panx-de-fr
Isaacp
2022-09-19T23:18:16Z
103
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-15T21:55:15Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1637 - F1: 0.8599 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2897 | 1.0 | 715 | 0.1759 | 0.8369 | | 0.1462 | 2.0 | 1430 | 0.1587 | 0.8506 | | 0.0931 | 3.0 | 2145 | 0.1637 | 0.8599 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
sd-concepts-library/kawaii-colors
sd-concepts-library
2022-09-19T23:08:01Z
0
26
null
[ "license:mit", "region:us" ]
null
2022-09-15T20:07:40Z
--- license: mit --- ### Kawaii Colors on Stable Diffusion This is the `<kawaii-colors-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<kawaii-colors-style> 0](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/0.jpeg) ![<kawaii-colors-style> 1](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/3.jpeg) ![<kawaii-colors-style> 2](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/1.jpeg) ![<kawaii-colors-style> 3](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/2.jpeg) ![<kawaii-colors-style> 4](https://huggingface.co/sd-concepts-library/kawaii-colors/resolve/main/concept_images/4.jpeg)
research-backup/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated
research-backup
2022-09-19T21:47:15Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-17T11:45:40Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8450793650793651 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6176470588235294 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6261127596439169 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7498610339077265 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.886 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618421052631579 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6203703703703703 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9199939731806539 - name: F1 (macro) type: f1_macro value: 0.9158483158560947 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8457746478873239 - name: F1 (macro) type: f1_macro value: 0.6760195209742395 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6684723726977249 - name: F1 (macro) type: f1_macro value: 0.65910797043685 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.959379564582319 - name: F1 (macro) type: f1_macro value: 0.8779321856206035 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9031651519899718 - name: F1 (macro) type: f1_macro value: 0.9015700872047177 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:

- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.6176470588235294
  - Accuracy on SAT: 0.6261127596439169
  - Accuracy on BATS: 0.7498610339077265
  - Accuracy on U2: 0.618421052631579
  - Accuracy on U4: 0.6203703703703703
  - Accuracy on Google: 0.886
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.9199939731806539
  - Micro F1 score on CogALexV: 0.8457746478873239
  - Micro F1 score on EVALution: 0.6684723726977249
  - Micro F1 score on K&H+N: 0.959379564582319
  - Micro F1 score on ROOT09: 0.9031651519899718
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.8450793650793651

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:

- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 29
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated
research-backup
2022-09-19T21:39:01Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-16T13:22:04Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8476587301587302 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5962566844919787 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5964391691394659 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7559755419677598 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.87 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5043859649122807 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5902777777777778 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9135151423836071 - name: F1 (macro) type: f1_macro value: 0.9077476621792441 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8568075117370892 - name: F1 (macro) type: f1_macro value: 0.6862949146842514 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6793066088840737 - name: F1 (macro) type: f1_macro value: 0.6733689760415943 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9559713431174793 - name: F1 (macro) type: f1_macro value: 0.8691131481598299 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8934503290504543 - name: F1 (macro) type: f1_macro value: 0.8925413349776822 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:

- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.5962566844919787
  - Accuracy on SAT: 0.5964391691394659
  - Accuracy on BATS: 0.7559755419677598
  - Accuracy on U2: 0.5043859649122807
  - Accuracy on U4: 0.5902777777777778
  - Accuracy on Google: 0.87
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/classification.json)):
  - Micro F1 score on BLESS: 0.9135151423836071
  - Micro F1 score on CogALexV: 0.8568075117370892
  - Micro F1 score on EVALution: 0.6793066088840737
  - Micro F1 score on K&H+N: 0.9559713431174793
  - Micro F1 score on ROOT09: 0.8934503290504543
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: 0.8476587301587302

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:

```shell
pip install relbert
```

and load the model as below.

```python
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:

- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 29
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated
research-backup
2022-09-19T21:35:10Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-16T05:55:39Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.9175 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6657754010695187 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.658753709198813 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.783212896053363 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.922 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6008771929824561 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6481481481481481 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9198433026970017 - name: F1 (macro) type: f1_macro value: 0.9160770870840453 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8568075117370892 - name: F1 (macro) type: f1_macro value: 0.6976908408325354 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.685807150595883 - name: F1 (macro) type: f1_macro value: 0.6809745362689802 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9570842317590597 - name: F1 (macro) type: f1_macro value: 0.8743204606688812 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9059855844562832 - name: F1 (macro) type: f1_macro value: 0.9055132716987447 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6657754010695187
    - Accuracy on SAT: 0.658753709198813
    - Accuracy on BATS: 0.783212896053363
    - Accuracy on U2: 0.6008771929824561
    - Accuracy on U4: 0.6481481481481481
    - Accuracy on Google: 0.922
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9198433026970017
    - Micro F1 score on CogALexV: 0.8568075117370892
    - Micro F1 score on EVALution: 0.685807150595883
    - Micro F1 score on K&H+N: 0.9570842317590597
    - Micro F1 score on ROOT09: 0.9059855844562832
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.9175

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```
A sketch of comparing these embeddings across word pairs follows after this card.

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 27
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
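The relation embedding returned by `get_embedding` can be compared across word pairs, which is the operation underlying the analogy accuracies reported above. The following is a minimal sketch of that idea, assuming only the `get_embedding` call shown in the card plus NumPy; the cosine-similarity ranking here is illustrative, not the exact evaluation script behind the reported numbers.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-conceptnet-validated")

def cosine(a, b):
    # Cosine similarity between two relation embeddings.
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidate pairs by how similar their relation is to the query pair.
query = model.get_embedding(['Tokyo', 'Japan'])
candidates = [['Paris', 'France'], ['Paris', 'Germany']]
scores = [cosine(query, model.get_embedding(pair)) for pair in candidates]
print(candidates[int(np.argmax(scores))])  # expected: ['Paris', 'France']
```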
research-backup/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated
research-backup
2022-09-19T21:27:36Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-15T14:31:09Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8137698412698413 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6898395721925134 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6884272997032641 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8293496386881601 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.958 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6666666666666666 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6597222222222222 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9187886093114359 - name: F1 (macro) type: f1_macro value: 0.9155322599832632 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.865962441314554 - name: F1 (macro) type: f1_macro value: 0.7168264001292298 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6879739978331527 - name: F1 (macro) type: f1_macro value: 0.6688500009556503 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.955762676497183 - name: F1 (macro) type: f1_macro value: 0.8742975353162309 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9169539329363836 - name: F1 (macro) type: f1_macro value: 0.9152963472472981 --- # relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6898395721925134
    - Accuracy on SAT: 0.6884272997032641
    - Accuracy on BATS: 0.8293496386881601
    - Accuracy on U2: 0.6666666666666666
    - Accuracy on U4: 0.6597222222222222
    - Accuracy on Google: 0.958
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9187886093114359
    - Micro F1 score on CogALexV: 0.865962441314554
    - Micro F1 score on EVALution: 0.6879739978331527
    - Micro F1 score on K&H+N: 0.955762676497183
    - Micro F1 score on ROOT09: 0.9169539329363836
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8137698412698413

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 26
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated
research-backup
2022-09-19T21:23:56Z
106
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-15T07:04:30Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8667460317460317 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5935828877005348 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5964391691394659 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7871039466370205 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.926 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5833333333333334 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6273148148148148 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.906282959168299 - name: F1 (macro) type: f1_macro value: 0.8990211032404914 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8690140845070422 - name: F1 (macro) type: f1_macro value: 0.7171278125163605 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6841820151679306 - name: F1 (macro) type: f1_macro value: 0.6771693785145466 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9607706753842944 - name: F1 (macro) type: f1_macro value: 0.8896360611848646 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9122532121591977 - name: F1 (macro) type: f1_macro value: 0.910658358719055 --- # relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.5935828877005348
    - Accuracy on SAT: 0.5964391691394659
    - Accuracy on BATS: 0.7871039466370205
    - Accuracy on U2: 0.5833333333333334
    - Accuracy on U4: 0.6273148148148148
    - Accuracy on Google: 0.926
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.906282959168299
    - Micro F1 score on CogALexV: 0.8690140845070422
    - Micro F1 score on EVALution: 0.6841820151679306
    - Micro F1 score on K&H+N: 0.9607706753842944
    - Micro F1 score on ROOT09: 0.9122532121591977
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8667460317460317

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 25
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The template above determines the sentence encoded for each word pair; a sketch of this substitution follows after this card. The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
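The `template` hyperparameter above controls how a word pair is rendered into the sentence that the encoder actually sees. The helper below is a hypothetical reconstruction of that substitution step (relbert performs the equivalent internally): `<subj>` and `<obj>` are replaced by the pair, while `<mask>` is left in place for RoBERTa's mask token.

```python
# Hypothetical illustration of the manual prompt template listed in this card;
# the real substitution logic lives inside the relbert library.
TEMPLATE = "Today, I finally discovered the relation between <subj> and <obj> : <mask>"

def fill_template(subj: str, obj: str, template: str = TEMPLATE) -> str:
    # <mask> is intentionally left untouched for the tokenizer's mask token.
    return template.replace("<subj>", subj).replace("<obj>", obj)

print(fill_template("Tokyo", "Japan"))
# Today, I finally discovered the relation between Tokyo and Japan : <mask>
```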
research-backup/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated
research-backup
2022-09-19T21:20:17Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-14T23:37:25Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.888095238095238 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6283422459893048 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.629080118694362 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7959977765425236 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.92 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5701754385964912 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6134259259259259 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9172819044749133 - name: F1 (macro) type: f1_macro value: 0.9134777544987239 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8516431924882629 - name: F1 (macro) type: f1_macro value: 0.6909836328773065 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6738894907908992 - name: F1 (macro) type: f1_macro value: 0.6623942225782876 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9517284551714544 - name: F1 (macro) type: f1_macro value: 0.8593035416288995 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9000313381385145 - name: F1 (macro) type: f1_macro value: 0.8976663712913519 --- # relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6283422459893048
    - Accuracy on SAT: 0.629080118694362
    - Accuracy on BATS: 0.7959977765425236
    - Accuracy on U2: 0.5701754385964912
    - Accuracy on U4: 0.6134259259259259
    - Accuracy on Google: 0.92
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9172819044749133
    - Micro F1 score on CogALexV: 0.8516431924882629
    - Micro F1 score on EVALution: 0.6738894907908992
    - Micro F1 score on K&H+N: 0.9517284551714544
    - Micro F1 score on ROOT09: 0.9000313381385145
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.888095238095238

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

An illustrative NCE-style objective is sketched after this card. The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
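`nce_logout` is a noise-contrastive (NCE) objective specific to the relbert library, so its exact formulation is defined there. As a rough sketch under that caveat, a generic NCE-style contrastive loss over relation embeddings, using the `temperature_nce_constant` of 0.05 from the card, can be written as below; `nce_style_loss` is a stand-in name, not the actual `nce_logout` implementation.

```python
import torch
import torch.nn.functional as F

def nce_style_loss(anchor, positives, negatives, temperature=0.05):
    """Generic NCE-style contrastive loss over relation embeddings.

    anchor: (d,); positives: (p, d); negatives: (n, d).
    Illustrative stand-in, not the exact `nce_logout` objective.
    """
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1) @ anchor / temperature  # (p,)
    neg = F.normalize(negatives, dim=-1) @ anchor / temperature  # (n,)
    # Score each positive against the shared pool of negatives.
    logits = torch.cat([pos.unsqueeze(1), neg.unsqueeze(0).expand(len(pos), -1)], dim=1)
    labels = torch.zeros(len(pos), dtype=torch.long)  # the positive sits at index 0
    return F.cross_entropy(logits, labels)
```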
research-backup/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated
research-backup
2022-09-19T20:56:55Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-13T02:16:48Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.9188888888888889 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6764705882352942 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6824925816023739 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.783212896053363 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.952 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6481481481481481 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9156245291547386 - name: F1 (macro) type: f1_macro value: 0.9112259742347485 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8779342723004695 - name: F1 (macro) type: f1_macro value: 0.7367626946295457 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7199349945828819 - name: F1 (macro) type: f1_macro value: 0.7167850316669694 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9597968978229116 - name: F1 (macro) type: f1_macro value: 0.8852759251683162 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9078658727671576 - name: F1 (macro) type: f1_macro value: 0.9061538163621959 --- # relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6764705882352942
    - Accuracy on SAT: 0.6824925816023739
    - Accuracy on BATS: 0.783212896053363
    - Accuracy on U2: 0.6228070175438597
    - Accuracy on U4: 0.6481481481481481
    - Accuracy on Google: 0.952
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9156245291547386
    - Micro F1 score on CogALexV: 0.8779342723004695
    - Micro F1 score on EVALution: 0.7199349945828819
    - Micro F1 score on K&H+N: 0.9597968978229116
    - Micro F1 score on ROOT09: 0.9078658727671576
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.9188888888888889

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: False
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:53:15Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-12T07:10:13Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6846626984126984 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.31283422459893045 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3086053412462908 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.46192329071706506 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.63 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.34649122807017546 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3611111111111111 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8457134247400934 - name: F1 (macro) type: f1_macro value: 0.8210817253537833 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.846244131455399 - name: F1 (macro) type: f1_macro value: 0.6205542192501825 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6262188515709642 - name: F1 (macro) type: f1_macro value: 0.6158702387251406 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9545802323155039 - name: F1 (macro) type: f1_macro value: 0.8851331276863854 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9044186775305547 - name: F1 (macro) type: f1_macro value: 0.9039135057812416 --- # relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.31283422459893045
    - Accuracy on SAT: 0.3086053412462908
    - Accuracy on BATS: 0.46192329071706506
    - Accuracy on U2: 0.34649122807017546
    - Accuracy on U4: 0.3611111111111111
    - Accuracy on Google: 0.63
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8457134247400934
    - Micro F1 score on CogALexV: 0.846244131455399
    - Micro F1 score on EVALution: 0.6262188515709642
    - Micro F1 score on K&H+N: 0.9545802323155039
    - Micro F1 score on ROOT09: 0.9044186775305547
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.6846626984126984

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
sd-concepts-library/joe-mad
sd-concepts-library
2022-09-19T20:51:56Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-19T20:51:52Z
---
license: mit
---
### Joe Mad on Stable Diffusion
This is the `<joe-mad>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<joe-mad> 0](https://huggingface.co/sd-concepts-library/joe-mad/resolve/main/concept_images/3.jpeg)
![<joe-mad> 1](https://huggingface.co/sd-concepts-library/joe-mad/resolve/main/concept_images/0.jpeg)
![<joe-mad> 2](https://huggingface.co/sd-concepts-library/joe-mad/resolve/main/concept_images/1.jpeg)
![<joe-mad> 3](https://huggingface.co/sd-concepts-library/joe-mad/resolve/main/concept_images/2.jpeg)
research-backup/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:49:24Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-11T19:39:08Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.7637698412698413 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5133689839572193 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.516320474777448 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5958866036687048 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.748 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4605263157894737 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5231481481481481 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9025161970769926 - name: F1 (macro) type: f1_macro value: 0.8979165451427438 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8328638497652581 - name: F1 (macro) type: f1_macro value: 0.6469572777603673 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6630552546045504 - name: F1 (macro) type: f1_macro value: 0.6493250582245075 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9562495652778744 - name: F1 (macro) type: f1_macro value: 0.8695137253747418 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8906298965841429 - name: F1 (macro) type: f1_macro value: 0.8885946595123109 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.5133689839572193
    - Accuracy on SAT: 0.516320474777448
    - Accuracy on BATS: 0.5958866036687048
    - Accuracy on U2: 0.4605263157894737
    - Accuracy on U4: 0.5231481481481481
    - Accuracy on Google: 0.748
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9025161970769926
    - Micro F1 score on CogALexV: 0.8328638497652581
    - Micro F1 score on EVALution: 0.6630552546045504
    - Micro F1 score on K&H+N: 0.9562495652778744
    - Micro F1 score on ROOT09: 0.8906298965841429
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7637698412698413

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```
An approximation of this embedding with plain transformers is sketched after this card.

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
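Because the checkpoint is a standard RoBERTa encoder (hence the `feature-extraction` tag), the `average_no_mask` embedding can be roughly approximated with plain transformers by mean-pooling hidden states over the prompted sentence. This is a sketch under that assumption only; the library's exact pooling, in particular how the `<mask>` position is treated in `average_no_mask` mode, is defined in relbert itself.

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "relbert/roberta-large-semeval2012-average-no-mask-prompt-e-nce-classification-conceptnet-validated"
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

# Prompt E filled with the pair (Tokyo, Japan); <mask> is RoBERTa's mask token.
text = ("I wasn’t aware of this relationship, but I just read in the "
        "encyclopedia that Japan is Tokyo’s <mask>")
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state       # (1, seq_len, 1024)
attn = inputs["attention_mask"].unsqueeze(-1)          # (1, seq_len, 1)
vector = (hidden * attn).sum(dim=1) / attn.sum(dim=1)  # naive mean pooling -> (1, 1024)
```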
research-backup/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:41:36Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-11T02:05:54Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.6545238095238095 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.29411764705882354 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.29080118694362017 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.4641467481934408 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.614 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.32456140350877194 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.3449074074074074 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8862437848425494 - name: F1 (macro) type: f1_macro value: 0.8781526549150734 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8370892018779342 - name: F1 (macro) type: f1_macro value: 0.6286516686265566 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.5384615384615384 - name: F1 (macro) type: f1_macro value: 0.5368027921312294 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9659177853516032 - name: F1 (macro) type: f1_macro value: 0.8925325170399768 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8567847069884049 - name: F1 (macro) type: f1_macro value: 0.8346603805121989 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.29411764705882354
    - Accuracy on SAT: 0.29080118694362017
    - Accuracy on BATS: 0.4641467481934408
    - Accuracy on U2: 0.32456140350877194
    - Accuracy on U4: 0.3449074074074074
    - Accuracy on Google: 0.614
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8862437848425494
    - Micro F1 score on CogALexV: 0.8370892018779342
    - Micro F1 score on EVALution: 0.5384615384615384
    - Micro F1 score on K&H+N: 0.9659177853516032
    - Micro F1 score on ROOT09: 0.8567847069884049
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.6545238095238095

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:37:42Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-10T17:17:52Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8167460317460318
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.516042780748663
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5281899109792285
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.632017787659811
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.724
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4342105263157895
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5069444444444444
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9034202199789061
    - name: F1 (macro)
      type: f1_macro
      value: 0.893273397921436
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8342723004694835
    - name: F1 (macro)
      type: f1_macro
      value: 0.6453699846432566
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6581798483206934
    - name: F1 (macro)
      type: f1_macro
      value: 0.640639393261134
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9604228976838005
    - name: F1 (macro)
      type: f1_macro
      value: 0.8814339609725079
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8909432779692886
    - name: F1 (macro)
      type: f1_macro
      value: 0.8914692333897629
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.516042780748663
    - Accuracy on SAT: 0.5281899109792285
    - Accuracy on BATS: 0.632017787659811
    - Accuracy on U2: 0.4342105263157895
    - Accuracy on U4: 0.5069444444444444
    - Accuracy on Google: 0.724
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9034202199789061
    - Micro F1 score on CogALexV: 0.8342723004694835
    - Micro F1 score on EVALution: 0.6581798483206934
    - Micro F1 score on K&H+N: 0.9604228976838005
    - Micro F1 score on ROOT09: 0.8909432779692886
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8167460317460318

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
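In practice the embedding is most useful when compared across word pairs. The following is a minimal sketch of relational similarity scoring with cosine similarity; it is not part of the official card and assumes only `numpy` on top of the usage snippet above.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce-classification-conceptnet-validated")

def cosine(a, b):
    # Cosine similarity between two 1-D vectors.
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v_capital = model.get_embedding(['Tokyo', 'Japan'])
v_analogous = model.get_embedding(['Paris', 'France'])
v_unrelated = model.get_embedding(['banana', 'bicycle'])

# Two capital-of pairs should score higher than an unrelated control pair.
print(cosine(v_capital, v_analogous))
print(cosine(v_capital, v_unrelated))
```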
research-backup/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:34:07Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-10T08:32:21Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7367857142857143
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3342245989304813
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.33827893175074186
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3968871595330739
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.592
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3201754385964912
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3125
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9022148561096881
    - name: F1 (macro)
      type: f1_macro
      value: 0.8962429050248129
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8049295774647888
    - name: F1 (macro)
      type: f1_macro
      value: 0.6122481358269966
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.652762730227519
    - name: F1 (macro)
      type: f1_macro
      value: 0.6101323743101166
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9603533421437018
    - name: F1 (macro)
      type: f1_macro
      value: 0.8709644325592566
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8874960827326857
    - name: F1 (macro)
      type: f1_macro
      value: 0.8864394662565577
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3342245989304813
    - Accuracy on SAT: 0.33827893175074186
    - Accuracy on BATS: 0.3968871595330739
    - Accuracy on U2: 0.3201754385964912
    - Accuracy on U4: 0.3125
    - Accuracy on Google: 0.592
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9022148561096881
    - Micro F1 score on CogALexV: 0.8049295774647888
    - Micro F1 score on EVALution: 0.652762730227519
    - Micro F1 score on K&H+N: 0.9603533421437018
    - Micro F1 score on ROOT09: 0.8874960827326857
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7367857142857143

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
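The `template` listed under the training hyperparameters is a manual prompt in which `<subj>`, `<obj>`, and `<mask>` are placeholders. The substitution is handled inside the library; the helper below (a hypothetical name, not a RelBERT API) only illustrates what the filled prompt looks like.

```python
def fill_template(template: str, subj: str, obj: str, mask_token: str = "<mask>") -> str:
    # Illustrative only: substitute a word pair into the manual prompt.
    # RelBERT performs this step internally using the tokenizer's mask token.
    return (template.replace("<subj>", subj)
                    .replace("<obj>", obj)
                    .replace("<mask>", mask_token))

template = ("Today, I finally discovered the relation between <subj> and <obj> : "
            "<subj> is the <mask> of <obj>")
print(fill_template(template, "Tokyo", "Japan"))
# Today, I finally discovered the relation between Tokyo and Japan : Tokyo is the <mask> of Japan
```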
research-backup/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:26:21Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-09T04:06:42Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8433333333333334
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4732620320855615
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.49258160237388726
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5986659255141745
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.686
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.44298245614035087
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4930555555555556
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9085430164230828
    - name: F1 (macro)
      type: f1_macro
      value: 0.9029499017420614
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8359154929577466
    - name: F1 (macro)
      type: f1_macro
      value: 0.6401332628753275
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6581798483206934
    - name: F1 (macro)
      type: f1_macro
      value: 0.6411620033399844
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9586840091813313
    - name: F1 (macro)
      type: f1_macro
      value: 0.8809925441051085
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8824819805703541
    - name: F1 (macro)
      type: f1_macro
      value: 0.877314171779575
---
# relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.4732620320855615
    - Accuracy on SAT: 0.49258160237388726
    - Accuracy on BATS: 0.5986659255141745
    - Accuracy on U2: 0.44298245614035087
    - Accuracy on U4: 0.4930555555555556
    - Accuracy on Google: 0.686
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9085430164230828
    - Micro F1 score on CogALexV: 0.8359154929577466
    - Micro F1 score on EVALution: 0.6581798483206934
    - Micro F1 score on K&H+N: 0.9586840091813313
    - Micro F1 score on ROOT09: 0.8824819805703541
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8433333333333334

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
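The analogy-question accuracies above come from picking, among the candidate pairs, the one whose relation embedding is closest to the query pair's. The official numbers are produced by the RelBERT evaluation scripts; a rough sketch of that selection rule (assuming `numpy`) could look like:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-b-nce-classification-conceptnet-validated")

def solve_analogy(query, candidates):
    # Return the candidate pair whose relation embedding has the highest
    # cosine similarity with the query pair's relation embedding.
    def embed(pair):
        v = np.asarray(model.get_embedding(list(pair)))
        return v / np.linalg.norm(v)
    q = embed(query)
    return max(candidates, key=lambda c: float(embed(c) @ q))

print(solve_analogy(('word', 'language'),
                    [('note', 'music'), ('apple', 'fruit'), ('wheel', 'car')]))
```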
research-backup/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:22:11Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-08T19:21:31Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7666666666666666
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3342245989304813
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.33827893175074186
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3885491939966648
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.542
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3201754385964912
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.33564814814814814
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8865451258098539
    - name: F1 (macro)
      type: f1_macro
      value: 0.8770785182418419
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8401408450704225
    - name: F1 (macro)
      type: f1_macro
      value: 0.6242491296371133
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6749729144095341
    - name: F1 (macro)
      type: f1_macro
      value: 0.6505812342477592
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9607706753842944
    - name: F1 (macro)
      type: f1_macro
      value: 0.8781957733610742
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8994045753682232
    - name: F1 (macro)
      type: f1_macro
      value: 0.8968786782259857
---
# relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3342245989304813
    - Accuracy on SAT: 0.33827893175074186
    - Accuracy on BATS: 0.3885491939966648
    - Accuracy on U2: 0.3201754385964912
    - Accuracy on U4: 0.33564814814814814
    - Accuracy on Google: 0.542
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8865451258098539
    - Micro F1 score on CogALexV: 0.8401408450704225
    - Micro F1 score on EVALution: 0.6749729144095341
    - Micro F1 score on K&H+N: 0.9607706753842944
    - Micro F1 score on ROOT09: 0.8994045753682232
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.7666666666666666

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 1
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
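For the lexical relation classification results, a classifier is fit on top of the frozen relation embeddings. The reported F1 scores come from the RelBERT evaluation pipeline; the sketch below only conveys the general recipe, using scikit-learn and a toy label set (the pairs and labels here are placeholders, not the benchmark data).

```python
import numpy as np
from relbert import RelBERT
from sklearn.linear_model import LogisticRegression

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-a-nce-classification-conceptnet-validated")

# Toy training data: (head, tail) pairs with relation labels.
pairs = [('dog', 'animal'), ('car', 'vehicle'), ('hot', 'cold'), ('big', 'small')]
labels = ['hypernym', 'hypernym', 'antonym', 'antonym']

# Embed each pair and fit a linear classifier on the frozen vectors.
X = np.asarray([model.get_embedding(list(p)) for p in pairs])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = np.asarray([model.get_embedding(['cat', 'animal'])])
print(clf.predict(test))  # expected: ['hypernym']
```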
research-backup/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:14:34Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-07T21:38:26Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8544444444444445
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6524064171122995
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6498516320474778
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7509727626459144
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.902
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6271929824561403
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.625
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9246647581738737
    - name: F1 (macro)
      type: f1_macro
      value: 0.9201116139693363
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8826291079812206
    - name: F1 (macro)
      type: f1_macro
      value: 0.74506786895136
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7172264355362946
    - name: F1 (macro)
      type: f1_macro
      value: 0.703292242462215
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9616748974055783
    - name: F1 (macro)
      type: f1_macro
      value: 0.8934154139843127
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9094327796928863
    - name: F1 (macro)
      type: f1_macro
      value: 0.906471425124189
---
# relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6524064171122995
    - Accuracy on SAT: 0.6498516320474778
    - Accuracy on BATS: 0.7509727626459144
    - Accuracy on U2: 0.6271929824561403
    - Accuracy on U4: 0.625
    - Accuracy on Google: 0.902
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9246647581738737
    - Micro F1 score on CogALexV: 0.8826291079812206
    - Micro F1 score on EVALution: 0.7172264355362946
    - Micro F1 score on K&H+N: 0.9616748974055783
    - Micro F1 score on ROOT09: 0.9094327796928863
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8544444444444445

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
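The `temperature_nce_rank` entry above describes a rank-dependent temperature interpolated linearly between `min` and `max`. The helper below reflects our reading of that config (it is not code from the library):

```python
def rank_temperature(rank: int, n_ranks: int, t_min: float = 0.01, t_max: float = 0.05) -> float:
    # Linearly interpolate the temperature over ranks 1..n_ranks, mirroring
    # temperature_nce_rank = {'min': 0.01, 'max': 0.05, 'type': 'linear'}.
    if n_ranks <= 1:
        return t_min
    alpha = (rank - 1) / (n_ranks - 1)
    return t_min + alpha * (t_max - t_min)

print([round(rank_temperature(r, 5), 3) for r in range(1, 6)])
# [0.01, 0.02, 0.03, 0.04, 0.05]
```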
research-backup/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated
research-backup
2022-09-19T20:10:10Z
102
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-07T10:51:00Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5911706349206349
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3235294117647059
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.314540059347181
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.4118954974986103
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.43
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.34649122807017546
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.3125
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8142232936567726
    - name: F1 (macro)
      type: f1_macro
      value: 0.7823150685401111
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7779342723004695
    - name: F1 (macro)
      type: f1_macro
      value: 0.4495225434483775
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.5357529794149513
    - name: F1 (macro)
      type: f1_macro
      value: 0.45418166183928343
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8190164846630034
    - name: F1 (macro)
      type: f1_macro
      value: 0.6465234410767566
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8834221247257913
    - name: F1 (macro)
      type: f1_macro
      value: 0.8771202456083294
---
# relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.3235294117647059
    - Accuracy on SAT: 0.314540059347181
    - Accuracy on BATS: 0.4118954974986103
    - Accuracy on U2: 0.34649122807017546
    - Accuracy on U4: 0.3125
    - Accuracy on Google: 0.43
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.8142232936567726
    - Micro F1 score on CogALexV: 0.7779342723004695
    - Micro F1 score on EVALution: 0.5357529794149513
    - Micro F1 score on K&H+N: 0.8190164846630034
    - Micro F1 score on ROOT09: 0.8834221247257913
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.5911706349206349

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 24
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
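The `nce_logout` loss used across these models is a noise-contrastive objective over relation embeddings; its exact variant is defined in the RelBERT code base. A generic InfoNCE-style version of the idea, written in PyTorch purely for illustration, looks like:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature: float = 0.05) -> torch.Tensor:
    # anchor, positive: (d,) embeddings; negatives: (n, d) embeddings.
    # Standard InfoNCE: the positive competes against the negatives in a softmax.
    a = F.normalize(anchor, dim=-1)
    pos = torch.dot(a, F.normalize(positive, dim=-1)) / temperature
    negs = F.normalize(negatives, dim=-1) @ a / temperature
    logits = torch.cat([pos.unsqueeze(0), negs])
    return -F.log_softmax(logits, dim=0)[0]

loss = info_nce(torch.randn(1024), torch.randn(1024), torch.randn(8, 1024))
print(loss)
```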
research-backup/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated
research-backup
2022-09-19T19:55:10Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-09-09T12:51:44Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8295436507936508
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5828877005347594
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6023738872403561
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6170094496942746
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.842
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5219298245614035
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5347222222222222
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9127617899653457
    - name: F1 (macro)
      type: f1_macro
      value: 0.9077484042036353
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8523474178403756
    - name: F1 (macro)
      type: f1_macro
      value: 0.6871561847645433
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.676056338028169
    - name: F1 (macro)
      type: f1_macro
      value: 0.6699220665498732
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9604228976838005
    - name: F1 (macro)
      type: f1_macro
      value: 0.8725502582807458
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8865559385772485
    - name: F1 (macro)
      type: f1_macro
      value: 0.8814062245146053
---
# relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.5828877005347594
    - Accuracy on SAT: 0.6023738872403561
    - Accuracy on BATS: 0.6170094496942746
    - Accuracy on U2: 0.5219298245614035
    - Accuracy on U4: 0.5347222222222222
    - Accuracy on Google: 0.842
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9127617899653457
    - Micro F1 score on CogALexV: 0.8523474178403756
    - Micro F1 score on EVALution: 0.676056338028169
    - Micro F1 score on K&H+N: 0.9604228976838005
    - Micro F1 score on ROOT09: 0.8865559385772485
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8295436507936508

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- split: train
- data_eval: relbert/conceptnet_high_confidence
- split_eval: full
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: nce_logout
- classification_loss: True
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- exclude_relation_eval: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
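The Relation Mapping score measures how well the model aligns two analogous domains (e.g., the solar system and the atom). The benchmark is scored by the RelBERT evaluation pipeline; the brute-force scorer below only conveys the idea, and its function names are ours, not library APIs.

```python
from itertools import permutations

import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-d-nce-classification-conceptnet-validated")

def embed(pair):
    # Unit-normalized relation embedding of a (head, tail) pair.
    v = np.asarray(model.get_embedding(list(pair)))
    return v / np.linalg.norm(v)

def best_mapping(source, target):
    # Try every one-to-one alignment of source terms onto target terms and
    # keep the one whose aligned term pairs have the most similar relations.
    def score(mapping):
        return sum(float(embed((s1, s2)) @ embed((t1, t2)))
                   for (s1, t1), (s2, t2) in zip(mapping, mapping[1:]))
    candidates = [list(zip(source, perm)) for perm in permutations(target)]
    return max(candidates, key=score)

print(best_mapping(['sun', 'planet'], ['nucleus', 'electron']))
```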
research-backup/roberta-large-semeval2012-average-no-mask-prompt-e-loob
research-backup
2022-09-19T19:51:19Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-30T14:20:05Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9032142857142857
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5721925133689839
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5667655786350149
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7776542523624236
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.872
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5833333333333334
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6203703703703703
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9202953141479584
    - name: F1 (macro)
      type: f1_macro
      value: 0.9155901755002147
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8467136150234742
    - name: F1 (macro)
      type: f1_macro
      value: 0.6838421887453545
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6836403033586133
    - name: F1 (macro)
      type: f1_macro
      value: 0.6705678270928033
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9625095638867636
    - name: F1 (macro)
      type: f1_macro
      value: 0.8774656452359669
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9009714822939517
    - name: F1 (macro)
      type: f1_macro
      value: 0.8985547104456186
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.5721925133689839
    - Accuracy on SAT: 0.5667655786350149
    - Accuracy on BATS: 0.7776542523624236
    - Accuracy on U2: 0.5833333333333334
    - Accuracy on U4: 0.6203703703703703
    - Accuracy on Google: 0.872
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9202953141479584
    - Micro F1 score on CogALexV: 0.8467136150234742
    - Micro F1 score on EVALution: 0.6836403033586133
    - Micro F1 score on K&H+N: 0.9625095638867636
    - Micro F1 score on ROOT09: 0.9009714822939517
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.9032142857142857

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-e-loob/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
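Unlike the `nce_logout` models above, this family trains with `loss_function: info_loob`, a leave-one-out bound in which the positive similarity is excluded from the denominator. The authoritative definition is in the RelBERT repository; a common formulation of the idea, in PyTorch for illustration, is:

```python
import torch
import torch.nn.functional as F

def info_loob(anchor, positive, negatives, temperature: float = 0.05) -> torch.Tensor:
    # Leave-one-out bound: unlike InfoNCE, the positive similarity is kept
    # out of the denominator, which contains negatives only.
    a = F.normalize(anchor, dim=-1)
    pos = torch.dot(a, F.normalize(positive, dim=-1)) / temperature
    negs = F.normalize(negatives, dim=-1) @ a / temperature
    return -(pos - torch.logsumexp(negs, dim=0))

loss = info_loob(torch.randn(1024), torch.randn(1024), torch.randn(8, 1024))
print(loss)
```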
research-backup/roberta-large-semeval2012-average-no-mask-prompt-d-loob
research-backup
2022-09-19T19:47:44Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-30T06:57:12Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8871031746031746
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6871657754010695
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6913946587537092
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8148971650917176
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.958
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6359649122807017
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6458333333333334
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9153231881874341
    - name: F1 (macro)
      type: f1_macro
      value: 0.909786964934943
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8577464788732394
    - name: F1 (macro)
      type: f1_macro
      value: 0.6952254602767576
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6847237269772481
    - name: F1 (macro)
      type: f1_macro
      value: 0.6742659270266346
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9634137859080476
    - name: F1 (macro)
      type: f1_macro
      value: 0.8926357349234371
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9106863052334692
    - name: F1 (macro)
      type: f1_macro
      value: 0.9093125585829993
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob

RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
- Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6871657754010695
    - Accuracy on SAT: 0.6913946587537092
    - Accuracy on BATS: 0.8148971650917176
    - Accuracy on U2: 0.6359649122807017
    - Accuracy on U4: 0.6458333333333334
    - Accuracy on Google: 0.958
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9153231881874341
    - Micro F1 score on CogALexV: 0.8577464788732394
    - Micro F1 score on EVALution: 0.6847237269772481
    - Micro F1 score on K&H+N: 0.9634137859080476
    - Micro F1 score on ROOT09: 0.9106863052334692
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8871031746031746

### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below.
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 21
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/trainer_config.json).

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
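When embedding many pairs (for example, a whole evaluation set), it is convenient to wrap `get_embedding` in a small helper that returns an array of unit-normalized vectors. This is plain glue code around the documented single-pair call, assuming `numpy`:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob")

def embed_pairs(pairs):
    # Embed each (head, tail) pair and L2-normalize, so that dot products
    # between rows are cosine similarities.
    vectors = np.asarray([model.get_embedding(list(p)) for p in pairs])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

emb = embed_pairs([('Tokyo', 'Japan'), ('Paris', 'France'), ('Berlin', 'Germany')])
print(emb.shape)    # (3, 1024)
print(emb @ emb.T)  # pairwise cosine similarities
```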
research-backup/roberta-large-semeval2012-average-no-mask-prompt-c-loob
research-backup
2022-09-19T19:44:02Z
108
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-29T23:35:40Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9222619047619047
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6550802139037433
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6528189910979229
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8226792662590328
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.936
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6666666666666666
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6712962962962963
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9234593943046557
    - name: F1 (macro)
      type: f1_macro
      value: 0.9180602208649703
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8690140845070422
    - name: F1 (macro)
      type: f1_macro
      value: 0.7117308070284601
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.695557963163597
    - name: F1 (macro)
      type: f1_macro
      value: 0.6823770398712694
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9635528969882451
    - name: F1 (macro)
      type: f1_macro
      value: 0.8903933273008022
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9088060169225948
    - name: F1 (macro)
      type: f1_macro
      value: 0.9056193124925707
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6550802139037433
    - Accuracy on SAT: 0.6528189910979229
    - Accuracy on BATS: 0.8226792662590328
    - Accuracy on U2: 0.6666666666666666
    - Accuracy on U4: 0.6712962962962963
    - Accuracy on Google: 0.936
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9234593943046557
    - Micro F1 score on CogALexV: 0.8690140845070422
    - Micro F1 score on EVALution: 0.695557963163597
    - Micro F1 score on K&H+N: 0.9635528969882451
    - Micro F1 score on ROOT09: 0.9088060169225948
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.9222619047619047

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 21
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
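The `loss_function: info_loob` entry above refers to an InfoLOOB-style contrastive objective, which differs from InfoNCE in that the positive pair is left out of the denominator. The sketch below is a schematic reading of that objective, not the actual RelBERT training code; the function name, similarity choice, and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def info_loob(anchor, positive, negatives, temperature=0.05):
    """Schematic InfoLOOB loss: anchor/positive are (d,), negatives are (n, d)."""
    pos = torch.exp(F.cosine_similarity(anchor, positive, dim=0) / temperature)
    neg = torch.exp(
        F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1) / temperature
    ).sum()
    # Leave-one-out bound: the positive term does not appear in the denominator.
    return -torch.log(pos / neg)

# Toy call with random relation embeddings of RelBERT's dimensionality.
loss = info_loob(torch.randn(1024), torch.randn(1024), torch.randn(8, 1024))
print(loss)
```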
research-backup/roberta-large-semeval2012-average-no-mask-prompt-b-loob
research-backup
2022-09-19T19:40:25Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-29T16:15:09Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8373412698412699
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6042780748663101
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6023738872403561
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7904391328515842
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.914
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5307017543859649
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5995370370370371
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9114057556124755
    - name: F1 (macro)
      type: f1_macro
      value: 0.9068848357754794
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.853755868544601
    - name: F1 (macro)
      type: f1_macro
      value: 0.6897229218026726
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.66738894907909
    - name: F1 (macro)
      type: f1_macro
      value: 0.6606752688018641
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9581275648605412
    - name: F1 (macro)
      type: f1_macro
      value: 0.8767313605600328
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8928235662801629
    - name: F1 (macro)
      type: f1_macro
      value: 0.8910996698230066
---
# relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6042780748663101
    - Accuracy on SAT: 0.6023738872403561
    - Accuracy on BATS: 0.7904391328515842
    - Accuracy on U2: 0.5307017543859649
    - Accuracy on U4: 0.5995370370370371
    - Accuracy on Google: 0.914
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9114057556124755
    - Micro F1 score on CogALexV: 0.853755868544601
    - Micro F1 score on EVALution: 0.66738894907909
    - Micro F1 score on K&H+N: 0.9581275648605412
    - Micro F1 score on ROOT09: 0.8928235662801629
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8373412698412699

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
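The `template` entry above shows how each word pair is verbalised before encoding: `<subj>` and `<obj>` are replaced by the two words and `<mask>` stays as the model's mask token. A minimal sketch of that substitution follows; the helper function is hypothetical, written for illustration rather than taken from relbert.

```python
# The template string is copied from the hyperparameter list above.
TEMPLATE = ("Today, I finally discovered the relation between <subj> and <obj> : "
            "<obj> is <subj>'s <mask>")

def fill_template(subj: str, obj: str, mask_token: str = "<mask>") -> str:
    # Substitute the word pair into the manual prompt; RoBERTa's mask token
    # happens to be spelled "<mask>" as well.
    return (TEMPLATE.replace("<subj>", subj)
                    .replace("<obj>", obj)
                    .replace("<mask>", mask_token))

print(fill_template("Tokyo", "Japan"))
# Today, I finally discovered the relation between Tokyo and Japan : Japan is Tokyo's <mask>
```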
research-backup/roberta-large-semeval2012-average-prompt-e-loob
research-backup
2022-09-19T19:33:04Z
101
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-29T01:31:32Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-e-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9121031746031746
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5909090909090909
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5875370919881305
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7670928293496387
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.912
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5570175438596491
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5879629629629629
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9207473255989151
    - name: F1 (macro)
      type: f1_macro
      value: 0.9149001350257856
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8481220657276995
    - name: F1 (macro)
      type: f1_macro
      value: 0.6824179529207882
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6798483206933911
    - name: F1 (macro)
      type: f1_macro
      value: 0.6735513654805187
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9589622313417264
    - name: F1 (macro)
      type: f1_macro
      value: 0.872950103232891
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.895017235976183
    - name: F1 (macro)
      type: f1_macro
      value: 0.8900982680408713
---
# relbert/roberta-large-semeval2012-average-prompt-e-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.5909090909090909
    - Accuracy on SAT: 0.5875370919881305
    - Accuracy on BATS: 0.7670928293496387
    - Accuracy on U2: 0.5570175438596491
    - Accuracy on U4: 0.5879629629629629
    - Accuracy on Google: 0.912
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9207473255989151
    - Micro F1 score on CogALexV: 0.8481220657276995
    - Micro F1 score on EVALution: 0.6798483206933911
    - Micro F1 score on K&H+N: 0.9589622313417264
    - Micro F1 score on ROOT09: 0.895017235976183
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.9121031746031746

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
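The analogy accuracies reported above come from a multiple-choice setting: given a query word pair, the candidate pair whose relation embedding is most similar to the query's is chosen. The loop below sketches that scoring with made-up candidates; it is illustrative, not the actual evaluation code.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-loob")

# A toy analogy question: which candidate relates like ('word', 'language')?
query = ('word', 'language')
candidates = [('note', 'music'), ('tree', 'forest'), ('wheel', 'car')]

q = np.array(model.get_embedding(list(query)))
scores = []
for pair in candidates:
    c = np.array(model.get_embedding(list(pair)))
    # Cosine similarity between the query and candidate relation embeddings.
    scores.append(float(q @ c / (np.linalg.norm(q) * np.linalg.norm(c))))

print(candidates[int(np.argmax(scores))])  # the predicted answer
```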
research-backup/roberta-large-semeval2012-average-prompt-d-loob
research-backup
2022-09-19T19:29:19Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-28T18:08:47Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-d-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8432936507936508
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7032085561497327
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7091988130563798
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8182323513062812
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.962
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6535087719298246
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6342592592592593
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9154738586710863
    - name: F1 (macro)
      type: f1_macro
      value: 0.9105308478206379
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8652582159624412
    - name: F1 (macro)
      type: f1_macro
      value: 0.7157465075284571
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6841820151679306
    - name: F1 (macro)
      type: f1_macro
      value: 0.6652440461492628
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9582666759407387
    - name: F1 (macro)
      type: f1_macro
      value: 0.8705160523996387
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9078658727671576
    - name: F1 (macro)
      type: f1_macro
      value: 0.9051927463291504
---
# relbert/roberta-large-semeval2012-average-prompt-d-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.7032085561497327
    - Accuracy on SAT: 0.7091988130563798
    - Accuracy on BATS: 0.8182323513062812
    - Accuracy on U2: 0.6535087719298246
    - Accuracy on U4: 0.6342592592592593
    - Accuracy on Google: 0.962
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9154738586710863
    - Micro F1 score on CogALexV: 0.8652582159624412
    - Micro F1 score on EVALution: 0.6841820151679306
    - Micro F1 score on K&H+N: 0.9582666759407387
    - Micro F1 score on ROOT09: 0.9078658727671576
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8432936507936508

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-d-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-d-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-average-prompt-c-loob
research-backup
2022-09-19T19:25:37Z
109
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-28T10:46:43Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-average-prompt-c-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9857142857142858
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6363636363636364
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6261127596439169
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8210116731517509
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.906
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6403508771929824
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6574074074074074
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9180352568931747
    - name: F1 (macro)
      type: f1_macro
      value: 0.913245619730716
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8791079812206573
    - name: F1 (macro)
      type: f1_macro
      value: 0.7394683332126576
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.6998916576381365
    - name: F1 (macro)
      type: f1_macro
      value: 0.6908316763861931
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9588926758016276
    - name: F1 (macro)
      type: f1_macro
      value: 0.8808973874170258
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9109996866186149
    - name: F1 (macro)
      type: f1_macro
      value: 0.9081322080316404
---
# relbert/roberta-large-semeval2012-average-prompt-c-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6363636363636364
    - Accuracy on SAT: 0.6261127596439169
    - Accuracy on BATS: 0.8210116731517509
    - Accuracy on U2: 0.6403508771929824
    - Accuracy on U4: 0.6574074074074074
    - Accuracy on Google: 0.906
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9180352568931747
    - Micro F1 score on CogALexV: 0.8791079812206573
    - Micro F1 score on EVALution: 0.6998916576381365
    - Micro F1 score on K&H+N: 0.9588926758016276
    - Micro F1 score on ROOT09: 0.9109996866186149
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.9857142857142858

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
HyperMoon/wav2vec2-base-finetuned-deepfake-0919
HyperMoon
2022-09-19T19:24:22Z
163
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:asvspoof2019", "license:apache-2.0", "endpoints_compatible", "region:us" ]
audio-classification
2022-09-19T14:56:05Z
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- asvspoof2019
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-deepfake-0919
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-finetuned-deepfake-0919

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the asvspoof2019 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3335
- Accuracy: 0.8974

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3025        | 1.0   | 1586 | 0.3335          | 0.8974   |
| 0.4214        | 2.0   | 3172 | 0.3331          | 0.8974   |
| 0.4378        | 3.0   | 4758 | 0.3307          | 0.8974   |
| 0.3993        | 4.0   | 6344 | 0.3331          | 0.8974   |
| 0.2839        | 5.0   | 7930 | 0.3315          | 0.8974   |

### Framework versions

- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
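Since the card leaves usage unspecified, one plausible way to run the checkpoint is the standard `transformers` audio-classification pipeline; this is a sketch rather than documented usage, and the audio file path is a placeholder.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a generic audio classifier.
classifier = pipeline(
    "audio-classification",
    model="HyperMoon/wav2vec2-base-finetuned-deepfake-0919",
)

# Returns a list of {label, score} dicts for the input recording.
print(classifier("example.wav"))
```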
research-backup/roberta-large-semeval2012-mask-prompt-e-loob
research-backup
2022-09-19T19:14:37Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-27T12:42:28Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-e-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8682936507936508
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6176470588235294
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6231454005934718
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7570872707059477
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.874
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6008771929824561
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6226851851851852
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9311435889709206
    - name: F1 (macro)
      type: f1_macro
      value: 0.9268380061574883
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8744131455399061
    - name: F1 (macro)
      type: f1_macro
      value: 0.7267491613759859
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7053087757313109
    - name: F1 (macro)
      type: f1_macro
      value: 0.694918491135901
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9652222299506156
    - name: F1 (macro)
      type: f1_macro
      value: 0.8967485289493923
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8965841429019116
    - name: F1 (macro)
      type: f1_macro
      value: 0.8952392246946669
---
# relbert/roberta-large-semeval2012-mask-prompt-e-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6176470588235294
    - Accuracy on SAT: 0.6231454005934718
    - Accuracy on BATS: 0.7570872707059477
    - Accuracy on U2: 0.6008771929824561
    - Accuracy on U4: 0.6226851851851852
    - Accuracy on Google: 0.874
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9311435889709206
    - Micro F1 score on CogALexV: 0.8744131455399061
    - Micro F1 score on EVALution: 0.7053087757313109
    - Micro F1 score on K&H+N: 0.9652222299506156
    - Micro F1 score on ROOT09: 0.8965841429019116
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8682936507936508

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-e-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-e-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-mask-prompt-d-loob
research-backup
2022-09-19T19:10:57Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-27T05:04:10Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-d-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8978174603174603
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7058823529411765
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7002967359050445
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8121178432462479
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.944
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6973684210526315
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6550925925925926
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9278288383305711
    - name: F1 (macro)
      type: f1_macro
      value: 0.9233353025731263
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8809859154929578
    - name: F1 (macro)
      type: f1_macro
      value: 0.7412230050431491
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7177681473456122
    - name: F1 (macro)
      type: f1_macro
      value: 0.7028341351536899
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9682131181748627
    - name: F1 (macro)
      type: f1_macro
      value: 0.8931223998696634
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.914133500470072
    - name: F1 (macro)
      type: f1_macro
      value: 0.9109123462416034
---
# relbert/roberta-large-semeval2012-mask-prompt-d-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.7058823529411765
    - Accuracy on SAT: 0.7002967359050445
    - Accuracy on BATS: 0.8121178432462479
    - Accuracy on U2: 0.6973684210526315
    - Accuracy on U4: 0.6550925925925926
    - Accuracy on Google: 0.944
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9278288383305711
    - Micro F1 score on CogALexV: 0.8809859154929578
    - Micro F1 score on EVALution: 0.7177681473456122
    - Micro F1 score on K&H+N: 0.9682131181748627
    - Micro F1 score on ROOT09: 0.914133500470072
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8978174603174603

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 22
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-mask-prompt-c-loob
research-backup
2022-09-19T19:07:16Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-26T21:41:21Z
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/roberta-large-semeval2012-mask-prompt-c-loob
  results:
  - task:
      name: Relation Mapping
      type: sorting-task
    dataset:
      name: Relation Mapping
      args: relbert/relation_mapping
      type: relation-mapping
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8514285714285714
  - task:
      name: Analogy Questions (SAT full)
      type: multiple-choice-qa
    dataset:
      name: SAT full
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6310160427807486
  - task:
      name: Analogy Questions (SAT)
      type: multiple-choice-qa
    dataset:
      name: SAT
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6379821958456974
  - task:
      name: Analogy Questions (BATS)
      type: multiple-choice-qa
    dataset:
      name: BATS
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.7581989994441356
  - task:
      name: Analogy Questions (Google)
      type: multiple-choice-qa
    dataset:
      name: Google
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.912
  - task:
      name: Analogy Questions (U2)
      type: multiple-choice-qa
    dataset:
      name: U2
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5131578947368421
  - task:
      name: Analogy Questions (U4)
      type: multiple-choice-qa
    dataset:
      name: U4
      args: relbert/analogy_questions
      type: analogy-questions
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6087962962962963
  - task:
      name: Lexical Relation Classification (BLESS)
      type: classification
    dataset:
      name: BLESS
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9282808497815278
    - name: F1 (macro)
      type: f1_macro
      value: 0.9225058932245936
  - task:
      name: Lexical Relation Classification (CogALexV)
      type: classification
    dataset:
      name: CogALexV
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.8805164319248826
    - name: F1 (macro)
      type: f1_macro
      value: 0.7394780138338314
  - task:
      name: Lexical Relation Classification (EVALution)
      type: classification
    dataset:
      name: EVALution
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.7291440953412786
    - name: F1 (macro)
      type: f1_macro
      value: 0.7162526164842762
  - task:
      name: Lexical Relation Classification (K&H+N)
      type: classification
    dataset:
      name: K&H+N
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9662655630520971
    - name: F1 (macro)
      type: f1_macro
      value: 0.9015857808430136
  - task:
      name: Lexical Relation Classification (ROOT09)
      type: classification
    dataset:
      name: ROOT09
      args: relbert/lexical_relation_classification
      type: relation-classification
    metrics:
    - name: F1
      type: f1
      value: 0.9125665935443434
    - name: F1 (macro)
      type: f1_macro
      value: 0.9105681683853759
---
# relbert/roberta-large-semeval2012-mask-prompt-c-loob

A RelBERT model fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-loob/raw/main/analogy.json)):
    - Accuracy on SAT (full): 0.6310160427807486
    - Accuracy on SAT: 0.6379821958456974
    - Accuracy on BATS: 0.7581989994441356
    - Accuracy on U2: 0.5131578947368421
    - Accuracy on U4: 0.6087962962962963
    - Accuracy on Google: 0.912
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-loob/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9282808497815278
    - Micro F1 score on CogALexV: 0.8805164319248826
    - Micro F1 score on EVALution: 0.7291440953412786
    - Micro F1 score on K&H+N: 0.9662655630520971
    - Micro F1 score on ROOT09: 0.9125665935443434
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-loob/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8514285714285714

### Usage

This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip:
```shell
pip install relbert
```
then activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-c-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (1024, )
```

### Training hyperparameters

The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: mask
- data: relbert/semeval2012_relational_similarity
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <mask>
- loss_function: info_loob
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 30
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8

The full configuration can be found in the [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-c-loob/raw/main/trainer_config.json).

### Reference

If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```
research-backup/roberta-large-semeval2012-mask-prompt-a-loob
research-backup
2022-09-19T18:59:55Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-26T06:59:38Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-a-loob results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.9060317460317461 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6550802139037433 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.655786350148368 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8043357420789328 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.95 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.631578947368421 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6412037037037037 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9245140876902215 - name: F1 (macro) type: f1_macro value: 0.9208294548760101 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8814553990610329 - name: F1 (macro) type: f1_macro value: 0.7355497663400952 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7128927410617552 - name: F1 (macro) type: f1_macro value: 0.7065924774146382 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9646657856298254 - name: F1 (macro) type: f1_macro value: 0.8945677578632619 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9081792541523034 - name: F1 (macro) type: f1_macro value: 0.906414518159255 --- # relbert/roberta-large-semeval2012-mask-prompt-a-loob RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6550802139037433 - Accuracy on SAT: 0.655786350148368 - Accuracy on BATS: 0.8043357420789328 - Accuracy on U2: 0.631578947368421 - Accuracy on U4: 0.6412037037037037 - Accuracy on Google: 0.95 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9245140876902215 - Micro F1 score on CogALexV: 0.8814553990610329 - Micro F1 score on EVALution: 0.7128927410617552 - Micro F1 score on K&H+N: 0.9646657856298254 - Micro F1 score on ROOT09: 0.9081792541523034 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.9060317460317461 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-loob") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: info_loob - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 21 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-loob/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
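Beyond encoding a single pair, the relation embedding is meant to be compared across word pairs. Below is a minimal sketch of such a comparison; it assumes, as an illustration rather than official documentation, that `get_embedding` also accepts a list of word pairs and returns one 1024-dimensional vector per pair.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-loob")

# Encode two word pairs at once; each row is a 1024-dimensional relation vector.
emb = np.array(model.get_embedding([['Tokyo', 'Japan'], ['Paris', 'France']]))

# Cosine similarity close to 1 indicates the pairs share a relation (capital-of).
cos = emb[0] @ emb[1] / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(f"relational similarity: {cos:.3f}")
```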
sd-concepts-library/ikea-fabler
sd-concepts-library
2022-09-19T18:58:11Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-19T18:58:07Z
--- license: mit --- ### ikea-fabler on Stable Diffusion This is the `<ikea-fabler>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<ikea-fabler> 0](https://huggingface.co/sd-concepts-library/ikea-fabler/resolve/main/concept_images/3.jpeg) ![<ikea-fabler> 1](https://huggingface.co/sd-concepts-library/ikea-fabler/resolve/main/concept_images/0.jpeg) ![<ikea-fabler> 2](https://huggingface.co/sd-concepts-library/ikea-fabler/resolve/main/concept_images/1.jpeg) ![<ikea-fabler> 3](https://huggingface.co/sd-concepts-library/ikea-fabler/resolve/main/concept_images/2.jpeg) ![<ikea-fabler> 4](https://huggingface.co/sd-concepts-library/ikea-fabler/resolve/main/concept_images/4.jpeg)
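Outside the Colab notebooks, the concept can also be loaded programmatically. A minimal sketch with `diffusers` follows; it assumes a recent `diffusers` release with `load_textual_inversion` support, and the base checkpoint shown is an assumption (use whichever Stable Diffusion checkpoint the embedding was trained against).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Fetch the learned <ikea-fabler> token embedding from this repository.
pipe.load_textual_inversion("sd-concepts-library/ikea-fabler")

# The placeholder token can now be used like any other word in a prompt.
image = pipe("a photo of a <ikea-fabler> on a bookshelf").images[0]
image.save("ikea_fabler.png")
```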
research-backup/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce
research-backup
2022-09-19T18:56:16Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-20T14:24:21Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.911547619047619 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5935828877005348 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6023738872403561 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7498610339077265 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.868 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618421052631579 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6365740740740741 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9190899502787404 - name: F1 (macro) type: f1_macro value: 0.9137760457433256 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.854225352112676 - name: F1 (macro) type: f1_macro value: 0.6960792498811619 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6738894907908992 - name: F1 (macro) type: f1_macro value: 0.6683142084374337 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9637615636085414 - name: F1 (macro) type: f1_macro value: 0.890107974704234 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9053588216859918 - name: F1 (macro) type: f1_macro value: 0.9023263285944801 --- # relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5935828877005348 - Accuracy on SAT: 0.6023738872403561 - Accuracy on BATS: 0.7498610339077265 - Accuracy on U2: 0.618421052631579 - Accuracy on U4: 0.6365740740740741 - Accuracy on Google: 0.868 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9190899502787404 - Micro F1 score on CogALexV: 0.854225352112676 - Micro F1 score on EVALution: 0.6738894907908992 - Micro F1 score on K&H+N: 0.9637615636085414 - Micro F1 score on ROOT09: 0.9053588216859918 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.911547619047619 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 21 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
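As a rough illustration of how the analogy-question accuracies above are computed, candidate pairs can be ranked by the cosine similarity of their relation embeddings to a query pair. This is a sketch, not the evaluation script behind the reported numbers, and it assumes `get_embedding` handles a list of pairs.

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-e-nce")

query = ['word', 'language']
candidates = [['paint', 'portrait'], ['note', 'music'], ['tale', 'story']]

# Embed the query and all candidates in a single call.
vectors = np.array(model.get_embedding([query] + candidates))
q, c = vectors[0], vectors[1:]

# The predicted analogy is the candidate closest to the query relation.
scores = c @ q / (np.linalg.norm(c, axis=1) * np.linalg.norm(q))
print(candidates[int(scores.argmax())])
```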
adil-o/dqn-SpaceInvadersNoFrameskip-v4
adil-o
2022-09-19T18:46:41Z
1
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2022-09-19T18:46:06Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - metrics: - type: mean_reward value: 425.50 +/- 151.35 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga adil-o -f logs/ python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga adil-o ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 3), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
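The checkpoint can also be downloaded and loaded without the RL Zoo scripts. A minimal sketch with `huggingface_sb3` follows; the filename is an assumption based on the usual Zoo naming scheme.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the trained agent from the Hub (filename assumed from Zoo conventions).
checkpoint = load_from_hub(
    repo_id="adil-o/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)

# custom_objects keeps loading compatible across stable-baselines3 versions.
model = DQN.load(
    checkpoint,
    custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0},
)
print(model.policy)
```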
research-backup/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce
research-backup
2022-09-19T18:41:57Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-19T18:06:50Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8451190476190477 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6096256684491979 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6142433234421365 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7504168982768205 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.902 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5701754385964912 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6342592592592593 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9174325749585657 - name: F1 (macro) type: f1_macro value: 0.9147052349582953 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8525821596244132 - name: F1 (macro) type: f1_macro value: 0.6857226427858921 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6771397616468039 - name: F1 (macro) type: f1_macro value: 0.6712704719484096 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9540933435348126 - name: F1 (macro) type: f1_macro value: 0.8681826742269192 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9106863052334692 - name: F1 (macro) type: f1_macro value: 0.9083078769735016 --- # relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6096256684491979 - Accuracy on SAT: 0.6142433234421365 - Accuracy on BATS: 0.7504168982768205 - Accuracy on U2: 0.5701754385964912 - Accuracy on U4: 0.6342592592592593 - Accuracy on Google: 0.902 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9174325749585657 - Micro F1 score on CogALexV: 0.8525821596244132 - Micro F1 score on EVALution: 0.6771397616468039 - Micro F1 score on K&H+N: 0.9540933435348126 - Micro F1 score on ROOT09: 0.9106863052334692 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8451190476190477 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-no-mask-prompt-a-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-v2-average-prompt-c-nce
research-backup
2022-09-19T18:31:18Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-19T02:53:49Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-prompt-c-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8846428571428572 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6818181818181818 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6735905044510386 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.811561978877154 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.924 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.631578947368421 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6504629629629629 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.924815428657526 - name: F1 (macro) type: f1_macro value: 0.9212115289556371 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8720657276995305 - name: F1 (macro) type: f1_macro value: 0.7245215538948597 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6841820151679306 - name: F1 (macro) type: f1_macro value: 0.6787204202080052 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9559713431174793 - name: F1 (macro) type: f1_macro value: 0.8722517438133693 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9088060169225948 - name: F1 (macro) type: f1_macro value: 0.9066857579930224 --- # relbert/roberta-large-semeval2012-v2-average-prompt-c-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-c-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6818181818181818 - Accuracy on SAT: 0.6735905044510386 - Accuracy on BATS: 0.811561978877154 - Accuracy on U2: 0.631578947368421 - Accuracy on U4: 0.6504629629629629 - Accuracy on Google: 0.924 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-c-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.924815428657526 - Micro F1 score on CogALexV: 0.8720657276995305 - Micro F1 score on EVALution: 0.6841820151679306 - Micro F1 score on K&H+N: 0.9559713431174793 - Micro F1 score on ROOT09: 0.9088060169225948 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-c-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8846428571428572 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-prompt-c-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-c-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
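The lexical relation classification numbers above come from fitting a classifier on top of frozen relation embeddings. The sketch below shows that general setup on hypothetical toy data with scikit-learn; it is not the evaluation code used for the reported scores, and it assumes `get_embedding` returns feature vectors that can be passed directly to the classifier.

```python
from relbert import RelBERT
from sklearn.linear_model import LogisticRegression

model = RelBERT("relbert/roberta-large-semeval2012-v2-average-prompt-c-nce")

# Toy training set: word pairs labelled with their lexical relation.
pairs = [['dog', 'animal'], ['car', 'vehicle'], ['hot', 'cold'], ['big', 'small']]
labels = ['hypernym', 'hypernym', 'antonym', 'antonym']

# Frozen relation embeddings act as input features for a linear classifier.
clf = LogisticRegression(max_iter=1000).fit(model.get_embedding(pairs), labels)
print(clf.predict(model.get_embedding([['cat', 'animal']])))
```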
research-backup/roberta-large-semeval2012-v2-average-prompt-b-nce
research-backup
2022-09-19T18:27:44Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-18T21:50:31Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-prompt-b-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8919047619047619 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.606951871657754 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6023738872403561 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7793218454697054 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.904 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5570175438596491 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6342592592592593 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9160765406056953 - name: F1 (macro) type: f1_macro value: 0.9122021728567965 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8495305164319249 - name: F1 (macro) type: f1_macro value: 0.6901992286900205 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6543878656554712 - name: F1 (macro) type: f1_macro value: 0.6403838038925135 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.954858454475899 - name: F1 (macro) type: f1_macro value: 0.867619689233697 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8987778125979317 - name: F1 (macro) type: f1_macro value: 0.8964916628332213 --- # relbert/roberta-large-semeval2012-v2-average-prompt-b-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-b-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.606951871657754 - Accuracy on SAT: 0.6023738872403561 - Accuracy on BATS: 0.7793218454697054 - Accuracy on U2: 0.5570175438596491 - Accuracy on U4: 0.6342592592592593 - Accuracy on Google: 0.904 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-b-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9160765406056953 - Micro F1 score on CogALexV: 0.8495305164319249 - Micro F1 score on EVALution: 0.6543878656554712 - Micro F1 score on K&H+N: 0.954858454475899 - Micro F1 score on ROOT09: 0.8987778125979317 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-b-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8919047619047619 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-prompt-b-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 30 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-b-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-v2-average-prompt-a-nce
research-backup
2022-09-19T18:24:10Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-18T16:46:29Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-average-prompt-a-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8308333333333333 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6390374331550802 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6409495548961425 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7570872707059477 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.93 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.618421052631579 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6388888888888888 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9154738586710863 - name: F1 (macro) type: f1_macro value: 0.9102917119981933 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.852112676056338 - name: F1 (macro) type: f1_macro value: 0.6892409688901546 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6852654387865655 - name: F1 (macro) type: f1_macro value: 0.6726667668087644 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9533282325937261 - name: F1 (macro) type: f1_macro value: 0.862481874668915 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9078658727671576 - name: F1 (macro) type: f1_macro value: 0.9075386153074033 --- # relbert/roberta-large-semeval2012-v2-average-prompt-a-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-a-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6390374331550802 - Accuracy on SAT: 0.6409495548961425 - Accuracy on BATS: 0.7570872707059477 - Accuracy on U2: 0.618421052631579 - Accuracy on U4: 0.6388888888888888 - Accuracy on Google: 0.93 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-a-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9154738586710863 - Micro F1 score on CogALexV: 0.852112676056338 - Micro F1 score on EVALution: 0.6852654387865655 - Micro F1 score on K&H+N: 0.9533282325937261 - Micro F1 score on ROOT09: 0.9078658727671576 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-a-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8308333333333333 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-average-prompt-a-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-average-prompt-a-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-v2-mask-prompt-e-nce
research-backup
2022-09-19T18:20:37Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-18T11:42:44Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8457142857142858 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6096256684491979 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6112759643916914 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7576431350750417 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.878 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5964912280701754 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6087962962962963 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9264728039777008 - name: F1 (macro) type: f1_macro value: 0.9231888761944194 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8720657276995305 - name: F1 (macro) type: f1_macro value: 0.7203249423895846 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7074756229685807 - name: F1 (macro) type: f1_macro value: 0.7003587066174993 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9625095638867636 - name: F1 (macro) type: f1_macro value: 0.8943198093953978 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9022250078345346 - name: F1 (macro) type: f1_macro value: 0.9008228707899653 --- # relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6096256684491979 - Accuracy on SAT: 0.6112759643916914 - Accuracy on BATS: 0.7576431350750417 - Accuracy on U2: 0.5964912280701754 - Accuracy on U4: 0.6087962962962963 - Accuracy on Google: 0.878 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9264728039777008 - Micro F1 score on CogALexV: 0.8720657276995305 - Micro F1 score on EVALution: 0.7074756229685807 - Micro F1 score on K&H+N: 0.9625095638867636 - Micro F1 score on ROOT09: 0.9022250078345346 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8457142857142858 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-e-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
sd-concepts-library/grisstyle
sd-concepts-library
2022-09-19T18:17:52Z
0
9
null
[ "license:mit", "region:us" ]
null
2022-09-19T18:17:47Z
--- license: mit --- ### GrisStyle on Stable Diffusion This is the `<gris>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<gris> 0](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/24.jpeg) ![<gris> 1](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/3.jpeg) ![<gris> 2](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/6.jpeg) ![<gris> 3](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/0.jpeg) ![<gris> 4](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/19.jpeg) ![<gris> 5](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/26.jpeg) ![<gris> 6](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/17.jpeg) ![<gris> 7](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/22.jpeg) ![<gris> 8](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/7.jpeg) ![<gris> 9](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/25.jpeg) ![<gris> 10](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/28.jpeg) ![<gris> 11](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/5.jpeg) ![<gris> 12](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/8.jpeg) ![<gris> 13](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/14.jpeg) ![<gris> 14](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/15.jpeg) ![<gris> 15](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/9.jpeg) ![<gris> 16](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/16.jpeg) ![<gris> 17](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/27.jpeg) ![<gris> 18](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/13.jpeg) ![<gris> 19](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/20.jpeg) ![<gris> 20](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/12.jpeg) ![<gris> 21](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/1.jpeg) ![<gris> 22](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/10.jpeg) ![<gris> 23](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/2.jpeg) ![<gris> 24](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/23.jpeg) ![<gris> 25](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/18.jpeg) ![<gris> 26](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/11.jpeg) ![<gris> 27](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/21.jpeg) ![<gris> 28](https://huggingface.co/sd-concepts-library/grisstyle/resolve/main/concept_images/4.jpeg)
research-backup/roberta-large-semeval2012-v2-mask-prompt-c-nce
research-backup
2022-09-19T18:13:28Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-18T01:33:59Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.926031746031746 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6417112299465241 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6498516320474778 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7821011673151751 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.92 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6228070175438597 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6226851851851852 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9213500075335241 - name: F1 (macro) type: f1_macro value: 0.9186450756231657 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8868544600938967 - name: F1 (macro) type: f1_macro value: 0.7505227584902805 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7036836403033586 - name: F1 (macro) type: f1_macro value: 0.6937893013670059 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.960283786603603 - name: F1 (macro) type: f1_macro value: 0.8893120683793279 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9157004073958008 - name: F1 (macro) type: f1_macro value: 0.9153408949649426 --- # relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6417112299465241 - Accuracy on SAT: 0.6498516320474778 - Accuracy on BATS: 0.7821011673151751 - Accuracy on U2: 0.6228070175438597 - Accuracy on U4: 0.6226851851851852 - Accuracy on Google: 0.92 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9213500075335241 - Micro F1 score on CogALexV: 0.8868544600938967 - Micro F1 score on EVALution: 0.7036836403033586 - Micro F1 score on K&H+N: 0.960283786603603 - Micro F1 score on ROOT09: 0.9157004073958008 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.926031746031746 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 30 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-c-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-v2-mask-prompt-b-nce
research-backup
2022-09-19T18:09:52Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-17T20:31:16Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8893650793650794 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5909090909090909 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5934718100890207 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7626459143968871 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.892 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5526315789473685 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6064814814814815 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9249660991411782 - name: F1 (macro) type: f1_macro value: 0.9227342891403616 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8685446009389672 - name: F1 (macro) type: f1_macro value: 0.7221315620076061 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6895991332611051 - name: F1 (macro) type: f1_macro value: 0.6823504904547306 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9632051192877513 - name: F1 (macro) type: f1_macro value: 0.8871487887884136 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9015982450642431 - name: F1 (macro) type: f1_macro value: 0.9009961240438994 --- # relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5909090909090909 - Accuracy on SAT: 0.5934718100890207 - Accuracy on BATS: 0.7626459143968871 - Accuracy on U2: 0.5526315789473685 - Accuracy on U4: 0.6064814814814815 - Accuracy on Google: 0.892 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9249660991411782 - Micro F1 score on CogALexV: 0.8685446009389672 - Micro F1 score on EVALution: 0.6895991332611051 - Micro F1 score on K&H+N: 0.9632051192877513 - Micro F1 score on ROOT09: 0.9015982450642431 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8893650793650794 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 27 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-b-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
AlexanderD/test_cats
AlexanderD
2022-09-19T18:08:02Z
0
0
keras
[ "keras", "region:us" ]
null
2022-09-19T17:11:54Z
--- library_name: keras tags: - keras widget: - src: https://huggingface.co/datasets/test_cats/cifar1.jpg example_title: Tiger ---
research-backup/roberta-large-semeval2012-v2-mask-prompt-a-nce
research-backup
2022-09-19T18:06:14Z
106
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity_v2", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-08-17T15:27:11Z
--- datasets: - relbert/semeval2012_relational_similarity_v2 model-index: - name: relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8338095238095238 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7165775401069518 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7181008902077152 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7626459143968871 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.946 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6359649122807017 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6435185185185185 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9264728039777008 - name: F1 (macro) type: f1_macro value: 0.9236796530968624 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8760563380281691 - name: F1 (macro) type: f1_macro value: 0.7316819468333253 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7085590465872156 - name: F1 (macro) type: f1_macro value: 0.6972629880144019 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9585448981011337 - name: F1 (macro) type: f1_macro value: 0.8754227812726614 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9088060169225948 - name: F1 (macro) type: f1_macro value: 0.9075082674855798 --- # relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity_v2](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity_v2). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.7165775401069518 - Accuracy on SAT: 0.7181008902077152 - Accuracy on BATS: 0.7626459143968871 - Accuracy on U2: 0.6359649122807017 - Accuracy on U4: 0.6435185185185185 - Accuracy on Google: 0.946 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9264728039777008 - Micro F1 score on CogALexV: 0.8760563380281691 - Micro F1 score on EVALution: 0.7085590465872156 - Micro F1 score on K&H+N: 0.9585448981011337 - Micro F1 score on ROOT09: 0.9088060169225948 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8338095238095238 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity_v2 - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
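Since the repository tags indicate standard `transformers`/`pytorch` RoBERTa weights with a feature-extraction pipeline, the checkpoint can presumably also be loaded without the relbert library. A hedged sketch under that assumption (the prompt string mirrors the training template listed above; RelBERT's own pooling over the template is not reproduced here):

```python
# Sketch: loading the checkpoint with vanilla transformers. Assumption: the
# repository ships standard RoBERTa weights and tokenizer, as the tags
# ("transformers", "pytorch", "roberta", "feature-extraction") suggest.
import torch
from transformers import AutoModel, AutoTokenizer

name = "relbert/roberta-large-semeval2012-v2-mask-prompt-a-nce"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

# Mirror the training template above for the pair (Tokyo, Japan).
text = (f"Today, I finally discovered the relation between Tokyo and Japan : "
        f"Tokyo is the {tokenizer.mask_token} of Japan")
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 1024)
print(hidden.shape)
```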
sd-concepts-library/jin-kisaragi
sd-concepts-library
2022-09-19T17:51:12Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-19T17:51:06Z
--- license: mit --- ### Jin Kisaragi on Stable Diffusion This is the `<jin-kisaragi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<jin-kisaragi> 0](https://huggingface.co/sd-concepts-library/jin-kisaragi/resolve/main/concept_images/3.jpeg) ![<jin-kisaragi> 1](https://huggingface.co/sd-concepts-library/jin-kisaragi/resolve/main/concept_images/0.jpeg) ![<jin-kisaragi> 2](https://huggingface.co/sd-concepts-library/jin-kisaragi/resolve/main/concept_images/5.jpeg) ![<jin-kisaragi> 3](https://huggingface.co/sd-concepts-library/jin-kisaragi/resolve/main/concept_images/1.jpeg) ![<jin-kisaragi> 4](https://huggingface.co/sd-concepts-library/jin-kisaragi/resolve/main/concept_images/2.jpeg) ![<jin-kisaragi> 5](https://huggingface.co/sd-concepts-library/jin-kisaragi/resolve/main/concept_images/4.jpeg)
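Beyond the linked notebooks, recent `diffusers` releases can load such a concept directly. A sketch under that assumption (the base checkpoint id is an example, and `load_textual_inversion` requires a sufficiently new diffusers version):

```python
# Sketch, not part of the original card: loading a textual-inversion concept
# with diffusers. Assumptions: a diffusers version providing
# load_textual_inversion, a CUDA device, and an example base model id.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base model
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/jin-kisaragi")

image = pipe("a detailed illustration of <jin-kisaragi>").images[0]
image.save("jin-kisaragi.png")
```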
clboetticher-school/xlm-roberta-base-finetuned-panx-all
clboetticher-school
2022-09-19T17:47:16Z
134
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T17:18:44Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1348 - F1: 0.8844 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3055 | 1.0 | 835 | 0.1755 | 0.8272 | | 0.1561 | 2.0 | 1670 | 0.1441 | 0.8727 | | 0.1016 | 3.0 | 2505 | 0.1348 | 0.8844 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
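For inference, the fine-tuned checkpoint should work with the standard `transformers` token-classification pipeline; a minimal sketch (the example sentence is illustrative, and `aggregation_strategy` assumes a reasonably recent transformers release):

```python
# Sketch: tagging text with the fine-tuned NER checkpoint via the standard
# transformers pipeline (the model is a token-classification head on
# xlm-roberta-base).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="clboetticher-school/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```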
sd-concepts-library/f-22
sd-concepts-library
2022-09-19T17:46:19Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-19T17:46:12Z
--- license: mit --- ### F-22 on Stable Diffusion This is the `<f-22>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<f-22> 0](https://huggingface.co/sd-concepts-library/f-22/resolve/main/concept_images/3.jpeg) ![<f-22> 1](https://huggingface.co/sd-concepts-library/f-22/resolve/main/concept_images/0.jpeg) ![<f-22> 2](https://huggingface.co/sd-concepts-library/f-22/resolve/main/concept_images/1.jpeg) ![<f-22> 3](https://huggingface.co/sd-concepts-library/f-22/resolve/main/concept_images/2.jpeg) ![<f-22> 4](https://huggingface.co/sd-concepts-library/f-22/resolve/main/concept_images/4.jpeg)
research-backup/roberta-large-semeval2012-mask-prompt-d-triplet
research-backup
2022-09-19T17:04:39Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-24T13:36:16Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-d-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8038492063492063 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6122994652406417 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6231454005934718 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7665369649805448 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.948 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6096491228070176 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6388888888888888 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9065843001356034 - name: F1 (macro) type: f1_macro value: 0.9034391812068588 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8814553990610329 - name: F1 (macro) type: f1_macro value: 0.7370945104669245 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7193932827735645 - name: F1 (macro) type: f1_macro value: 0.6939557303629665 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9527022327328372 - name: F1 (macro) type: f1_macro value: 0.8646220605131613 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.900658100908806 - name: F1 (macro) type: f1_macro value: 0.90318517559646 --- # relbert/roberta-large-semeval2012-mask-prompt-d-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6122994652406417 - Accuracy on SAT: 0.6231454005934718 - Accuracy on BATS: 0.7665369649805448 - Accuracy on U2: 0.6096491228070176 - Accuracy on U4: 0.6388888888888888 - Accuracy on Google: 0.948 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9065843001356034 - Micro F1 score on CogALexV: 0.8814553990610329 - Micro F1 score on EVALution: 0.7193932827735645 - Micro F1 score on K&H+N: 0.9527022327328372 - Micro F1 score on ROOT09: 0.900658100908806 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8038492063492063 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: semeval2012 - n_sample: 10 - custom_template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-d-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
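To make the `custom_template` hyperparameter concrete, the illustrative snippet below shows how such a manual prompt is instantiated for a word pair before encoding (RelBERT performs this substitution internally; the `fill` helper is hypothetical):

```python
# Illustrative only: how a manual template with <subj>/<obj>/<mask> slots is
# instantiated for a word pair. The <mask> token stays in place for the
# masked language model to predict.
template = ("I wasn’t aware of this relationship, but I just read in the "
            "encyclopedia that <subj> is the <mask> of <obj>")

def fill(template: str, head: str, tail: str) -> str:
    # Substitute the word pair into the subject and object slots.
    return template.replace("<subj>", head).replace("<obj>", tail)

print(fill(template, "Tokyo", "Japan"))
# -> I wasn’t aware of this relationship, but I just read in the
#    encyclopedia that Tokyo is the <mask> of Japan
```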
clboetticher-school/xlm-roberta-base-finetuned-panx-it
clboetticher-school
2022-09-19T17:02:30Z
104
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T16:46:40Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-it results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.it metrics: - name: F1 type: f1 value: 0.8124233755619126 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-it This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2630 - F1: 0.8124 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 | | 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 | | 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
research-backup/roberta-large-semeval2012-mask-prompt-a-triplet
research-backup
2022-09-19T16:51:50Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-24T13:29:44Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-a-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8050396825396825 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5721925133689839 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5786350148367952 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7687604224569206 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.926 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5614035087719298 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5972222222222222 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9120084375470845 - name: F1 (macro) type: f1_macro value: 0.9106038197651877 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8546948356807512 - name: F1 (macro) type: f1_macro value: 0.6744144716286173 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7258938244853739 - name: F1 (macro) type: f1_macro value: 0.7201470763428615 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9517980107115531 - name: F1 (macro) type: f1_macro value: 0.8677563702420764 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9012848636790974 - name: F1 (macro) type: f1_macro value: 0.8960382821239596 --- # relbert/roberta-large-semeval2012-mask-prompt-a-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5721925133689839 - Accuracy on SAT: 0.5786350148367952 - Accuracy on BATS: 0.7687604224569206 - Accuracy on U2: 0.5614035087719298 - Accuracy on U4: 0.5972222222222222 - Accuracy on Google: 0.926 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9120084375470845 - Micro F1 score on CogALexV: 0.8546948356807512 - Micro F1 score on EVALution: 0.7258938244853739 - Micro F1 score on K&H+N: 0.9517980107115531 - Micro F1 score on ROOT09: 0.9012848636790974 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8050396825396825 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: semeval2012 - n_sample: 10 - custom_template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
clboetticher-school/xlm-roberta-base-finetuned-panx-fr
clboetticher-school
2022-09-19T16:46:23Z
114
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T16:27:36Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.9213082901554404 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1117 - F1: 0.9213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5779 | 1.0 | 191 | 0.2832 | 0.8091 | | 0.2735 | 2.0 | 382 | 0.1570 | 0.8943 | | 0.1769 | 3.0 | 573 | 0.1117 | 0.9213 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
research-backup/roberta-large-semeval2012-average-prompt-c-triplet
research-backup
2022-09-19T16:43:19Z
103
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-24T10:43:59Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-average-prompt-c-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8742857142857143 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5748663101604278 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5786350148367952 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.820455808782657 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.918 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6140350877192983 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6319444444444444 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.924815428657526 - name: F1 (macro) type: f1_macro value: 0.9202612346118308 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8718309859154929 - name: F1 (macro) type: f1_macro value: 0.7152177972947781 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.704225352112676 - name: F1 (macro) type: f1_macro value: 0.6846186699994758 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9667524518327885 - name: F1 (macro) type: f1_macro value: 0.8963571819720165 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9015982450642431 - name: F1 (macro) type: f1_macro value: 0.8974114781326906 --- # relbert/roberta-large-semeval2012-average-prompt-c-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5748663101604278 - Accuracy on SAT: 0.5786350148367952 - Accuracy on BATS: 0.820455808782657 - Accuracy on U2: 0.6140350877192983 - Accuracy on U4: 0.6319444444444444 - Accuracy on Google: 0.918 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.924815428657526 - Micro F1 score on CogALexV: 0.8718309859154929 - Micro F1 score on EVALution: 0.704225352112676 - Micro F1 score on K&H+N: 0.9667524518327885 - Micro F1 score on ROOT09: 0.9015982450642431 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8742857142857143 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-c-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: semeval2012 - n_sample: 10 - custom_template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-c-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-average-prompt-e-triplet
research-backup
2022-09-19T16:31:56Z
96
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T18:25:55Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-average-prompt-e-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.9255952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.550802139037433 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5548961424332344 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.754863813229572 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.872 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.631578947368421 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5740740740740741 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9031188790116016 - name: F1 (macro) type: f1_macro value: 0.896130708234777 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8692488262910798 - name: F1 (macro) type: f1_macro value: 0.7176923678524982 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6933911159263272 - name: F1 (macro) type: f1_macro value: 0.6793569940444139 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9611184530847882 - name: F1 (macro) type: f1_macro value: 0.8913615954101612 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9088060169225948 - name: F1 (macro) type: f1_macro value: 0.9076649125882928 --- # relbert/roberta-large-semeval2012-average-prompt-e-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.550802139037433 - Accuracy on SAT: 0.5548961424332344 - Accuracy on BATS: 0.754863813229572 - Accuracy on U2: 0.631578947368421 - Accuracy on U4: 0.5740740740740741 - Accuracy on Google: 0.872 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9031188790116016 - Micro F1 score on CogALexV: 0.8692488262910798 - Micro F1 score on EVALution: 0.6933911159263272 - Micro F1 score on K&H+N: 0.9611184530847882 - Micro F1 score on ROOT09: 0.9088060169225948 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.9255952380952381 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: semeval2012 - n_sample: 10 - custom_template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
sd-concepts-library/singsing
sd-concepts-library
2022-09-19T16:30:34Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-19T16:30:28Z
--- license: mit --- ### Singsing on Stable Diffusion This is the `<singsing>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<singsing> 0](https://huggingface.co/sd-concepts-library/singsing/resolve/main/concept_images/3.jpeg) ![<singsing> 1](https://huggingface.co/sd-concepts-library/singsing/resolve/main/concept_images/0.jpeg) ![<singsing> 2](https://huggingface.co/sd-concepts-library/singsing/resolve/main/concept_images/1.jpeg) ![<singsing> 3](https://huggingface.co/sd-concepts-library/singsing/resolve/main/concept_images/2.jpeg)
research-backup/roberta-large-semeval2012-average-no-mask-prompt-d-triplet
research-backup
2022-09-19T16:23:34Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T16:52:00Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.815952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6951871658 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.706231454 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7882156754 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.924 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6622807018 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6527777778 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9153231881874341 - name: F1 (macro) type: f1_macro value: 0.9098445625290479 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8725352112676056 - name: F1 (macro) type: f1_macro value: 0.7174660438773314 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: EVALution args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6944745395449621 - name: F1 (macro) type: f1_macro value: 0.688951875758847 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9689782291159491 - name: F1 (macro) type: f1_macro value: 0.90395779327521 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9050454403008461 - name: F1 (macro) type: f1_macro value: 0.9062415320017446 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6951871658 - Accuracy on SAT: 0.706231454 - Accuracy on BATS: 0.7882156754 - Accuracy on U2: 0.6622807018 - Accuracy on U4: 0.6527777778 - Accuracy on Google: 0.924 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9153231881874341 - Micro F1 score on CogALexV: 0.8725352112676056 - Micro F1 score on EVALution: 0.6944745395449621 - Micro F1 score on K&H+N: 0.9689782291159491 - Micro F1 score on ROOT09: 0.9050454403008461 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.815952380952381 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: semeval2012 - n_sample: 10 - custom_template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-c-triplet
research-backup
2022-09-19T16:19:14Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T16:49:45Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8449404761904762 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5401069519 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5400593472 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7954419122 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.912 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6228070175 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.625 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9285821907488323 - name: F1 (macro) type: f1_macro value: 0.9238353183237691 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8795774647887324 - name: F1 (macro) type: f1_macro value: 0.7380564609392272 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6776814734561214 - name: F1 (macro) type: f1_macro value: 0.6542229601159605 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9683522292550601 - name: F1 (macro) type: f1_macro value: 0.897601966442876 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9088060169225948 - name: F1 (macro) type: f1_macro value: 0.909285139662564 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5401069519 - Accuracy on SAT: 0.5400593472 - Accuracy on BATS: 0.7954419122 - Accuracy on U2: 0.6228070175 - Accuracy on U4: 0.625 - Accuracy on Google: 0.912 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9285821907488323 - Micro F1 score on CogALexV: 0.8795774647887324 - Micro F1 score on EVALution: 0.6776814734561214 - Micro F1 score on K&H+N: 0.9683522292550601 - Micro F1 score on ROOT09: 0.9088060169225948 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8449404761904762 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: semeval2012 - n_sample: 10 - custom_template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
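As a quick illustration of what the relation embedding buys you, here is a minimal sketch (not part of the original card) that scores how analogous two word pairs are by cosine similarity; the use of numpy and the treatment of the output as a 1-D array are assumptions based on the `# shape of (1024, )` comment above.

```python
# Hedged sketch: compare two word pairs via their RelBERT relation embeddings.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-triplet")

query = np.array(model.get_embedding(['Tokyo', 'Japan']))       # (1024,)
candidate = np.array(model.get_embedding(['Paris', 'France']))  # (1024,)

# Cosine similarity: higher means the two pairs encode a more similar relation.
score = float(query @ candidate / (np.linalg.norm(query) * np.linalg.norm(candidate)))
print(score)
```

Repeating this over the answer candidates of an analogy question and taking the highest-scoring candidate is, roughly, how the analogy accuracies above are computed.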
clboetticher-school/xlm-roberta-base-finetuned-panx-de-fr
clboetticher-school
2022-09-19T16:15:42Z
134
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T15:49:33Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1608 - F1: 0.8593 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 | | 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 | | 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
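The auto-generated card ships no inference snippet; a hedged usage sketch with the standard transformers pipeline (assuming the tokenizer was pushed alongside the checkpoint) could look like this:

```python
# Hedged sketch: NER with the fine-tuned German/French PAN-X checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="clboetticher-school/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```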
sd-concepts-library/singsing-doll
sd-concepts-library
2022-09-19T16:14:12Z
0
0
null
[ "license:mit", "region:us" ]
null
2022-09-19T16:14:06Z
--- license: mit --- ### Singsing doll on Stable Diffusion This is the `<singsing>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<singsing> 0](https://huggingface.co/sd-concepts-library/singsing-doll/resolve/main/concept_images/3.jpeg) ![<singsing> 1](https://huggingface.co/sd-concepts-library/singsing-doll/resolve/main/concept_images/0.jpeg) ![<singsing> 2](https://huggingface.co/sd-concepts-library/singsing-doll/resolve/main/concept_images/1.jpeg) ![<singsing> 3](https://huggingface.co/sd-concepts-library/singsing-doll/resolve/main/concept_images/2.jpeg)
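For readers who prefer a script over the linked notebooks, recent versions of diffusers can load the concept directly; this is a hedged sketch, since the card itself only documents the Colab route, and the choice of `runwayml/stable-diffusion-v1-5` as base model is an assumption:

```python
# Hedged sketch: use the learned <singsing> token in a Stable Diffusion prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/singsing-doll")  # registers <singsing>

image = pipe("a photo of a <singsing> sitting on a bookshelf").images[0]
image.save("singsing.png")
```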
research-backup/roberta-large-semeval2012-average-no-mask-prompt-a-triplet
research-backup
2022-09-19T16:11:21Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:semeval2012", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T16:45:11Z
--- datasets: - semeval2012 model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8172222222222222 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5748663102 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5816023739 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.694274597 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.868 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.600877193 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5810185185 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9053789362663854 - name: F1 (macro) type: f1_macro value: 0.9048929138088492 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8598591549295774 - name: F1 (macro) type: f1_macro value: 0.6899576626683188 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6744312026002167 - name: F1 (macro) type: f1_macro value: 0.6530063441620636 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9689086735758503 - name: F1 (macro) type: f1_macro value: 0.8993902866318307 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8934503290504543 - name: F1 (macro) type: f1_macro value: 0.8920036295821294 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [semeval2012](https://huggingface.co/datasets/semeval2012). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5748663102 - Accuracy on SAT: 0.5816023739 - Accuracy on BATS: 0.694274597 - Accuracy on U2: 0.600877193 - Accuracy on U4: 0.5810185185 - Accuracy on Google: 0.868 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9053789362663854 - Micro F1 score on CogALexV: 0.8598591549295774 - Micro F1 score on EVALution: 0.6744312026002167 - Micro F1 score on K&H+N: 0.9689086735758503 - Micro F1 score on ROOT09: 0.8934503290504543 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8172222222222222 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: semeval2012 - n_sample: 10 - custom_template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - template: None - softmax_loss: True - in_batch_negative: True - parent_contrast: True - mse_margin: 1 - epoch: 1 - lr_warmup: 10 - batch: 64 - lr: 2e-05 - lr_decay: False - weight_decay: 0 - optimizer: adam - momentum: 0.9 - fp16: False - random_seed: 0 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-triplet/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
surya07/swin-tiny-patch4-window7-224-finetuned-eurosat
surya07
2022-09-19T16:11:19Z
217
0
transformers
[ "transformers", "pytorch", "tensorboard", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2022-09-19T14:33:01Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.875 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.4066 - Accuracy: 0.875 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.57 | 1 | 0.7569 | 0.5417 | | No log | 1.57 | 2 | 0.5000 | 0.8333 | | No log | 2.57 | 3 | 0.4066 | 0.875 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
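A hedged inference sketch (the generated card has no usage section; the image path below is a placeholder):

```python
# Hedged sketch: classify an image with the fine-tuned Swin checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="surya07/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("example.jpg"))  # placeholder path; a URL also works
```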
research-backup/roberta-large-semeval2012-average-prompt-e-nce
research-backup
2022-09-19T16:02:37Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-24T20:44:09Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-e-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.848452380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6016042780748663 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6023738872403561 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7476375764313508 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.86 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5482456140350878 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6111111111111112 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9193912912460449 - name: F1 (macro) type: f1_macro value: 0.9171163163754675 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8481220657276995 - name: F1 (macro) type: f1_macro value: 0.6734502135237685 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.685807150595883 - name: F1 (macro) type: f1_macro value: 0.679750083279063 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.962092230646171 - name: F1 (macro) type: f1_macro value: 0.8868721386428041 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.898464431212786 - name: F1 (macro) type: f1_macro value: 0.8953388906170653 --- # relbert/roberta-large-semeval2012-average-prompt-e-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6016042780748663 - Accuracy on SAT: 0.6023738872403561 - Accuracy on BATS: 0.7476375764313508 - Accuracy on U2: 0.5482456140350878 - Accuracy on U4: 0.6111111111111112 - Accuracy on Google: 0.86 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9193912912460449 - Micro F1 score on CogALexV: 0.8481220657276995 - Micro F1 score on EVALution: 0.685807150595883 - Micro F1 score on K&H+N: 0.962092230646171 - Micro F1 score on ROOT09: 0.898464431212786 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.848452380952381 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <obj> is <subj>’s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 21 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-e-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
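Beyond the single-pair call shown in the usage section, embedding many pairs at once is usually what the relation mapping task needs; whether `get_embedding` accepts a list of pairs like this is an assumption taken from the relbert repository, not something this card states:

```python
# Hedged sketch: embed several word pairs in one call (batch input is assumed).
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-nce")
pairs = [['Tokyo', 'Japan'], ['Paris', 'France'], ['sunflower', 'petal']]
vectors = model.get_embedding(pairs)  # expected: one 1024-d vector per pair
```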
research-backup/roberta-large-semeval2012-average-no-mask-prompt-c-nce
research-backup
2022-09-19T15:58:24Z
105
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T11:02:11Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.926031746031746 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6577540106951871 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.658753709198813 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8193440800444691 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.946 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6491228070175439 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6759259259259259 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9196926322133494 - name: F1 (macro) type: f1_macro value: 0.9153080947914317 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8715962441314554 - name: F1 (macro) type: f1_macro value: 0.7255280883118129 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.699349945828819 - name: F1 (macro) type: f1_macro value: 0.6824088213474949 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9533977881338248 - name: F1 (macro) type: f1_macro value: 0.8578016229945142 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9113130680037606 - name: F1 (macro) type: f1_macro value: 0.910034270119033 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6577540106951871 - Accuracy on SAT: 0.658753709198813 - Accuracy on BATS: 0.8193440800444691 - Accuracy on U2: 0.6491228070175439 - Accuracy on U4: 0.6759259259259259 - Accuracy on Google: 0.946 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9196926322133494 - Micro F1 score on CogALexV: 0.8715962441314554 - Micro F1 score on EVALution: 0.699349945828819 - Micro F1 score on K&H+N: 0.9533977881338248 - Micro F1 score on ROOT09: 0.9113130680037606 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.926031746031746 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 24 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-c-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-b-nce
research-backup
2022-09-19T15:54:08Z
112
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T10:59:39Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8173412698412699 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6122994652406417 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6142433234421365 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7865480822679266 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.93 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5394736842105263 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6018518518518519 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9174325749585657 - name: F1 (macro) type: f1_macro value: 0.9108478749677724 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.855868544600939 - name: F1 (macro) type: f1_macro value: 0.6923047005195835 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6836403033586133 - name: F1 (macro) type: f1_macro value: 0.667310500013795 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9517284551714544 - name: F1 (macro) type: f1_macro value: 0.8530904199464412 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9019116264493889 - name: F1 (macro) type: f1_macro value: 0.8996556790705655 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6122994652406417 - Accuracy on SAT: 0.6142433234421365 - Accuracy on BATS: 0.7865480822679266 - Accuracy on U2: 0.5394736842105263 - Accuracy on U4: 0.6018518518518519 - Accuracy on Google: 0.93 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9174325749585657 - Micro F1 score on CogALexV: 0.855868544600939 - Micro F1 score on EVALution: 0.6836403033586133 - Micro F1 score on K&H+N: 0.9517284551714544 - Micro F1 score on ROOT09: 0.9019116264493889 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8173412698412699 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-b-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-a-nce
research-backup
2022-09-19T15:49:51Z
106
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T10:57:35Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.866547619047619 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7112299465240641 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7062314540059347 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.782657031684269 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.936 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6754385964912281 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6921296296296297 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9124604489980412 - name: F1 (macro) type: f1_macro value: 0.9071904229357174 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8607981220657277 - name: F1 (macro) type: f1_macro value: 0.7021043673336924 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6863488624052004 - name: F1 (macro) type: f1_macro value: 0.6714181204599561 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9499895666689852 - name: F1 (macro) type: f1_macro value: 0.8482944164556818 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9075524913820119 - name: F1 (macro) type: f1_macro value: 0.9080337875282686 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.7112299465240641 - Accuracy on SAT: 0.7062314540059347 - Accuracy on BATS: 0.782657031684269 - Accuracy on U2: 0.6754385964912281 - Accuracy on U4: 0.6921296296296297 - Accuracy on Google: 0.936 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9124604489980412 - Micro F1 score on CogALexV: 0.8607981220657277 - Micro F1 score on EVALution: 0.6863488624052004 - Micro F1 score on K&H+N: 0.9499895666689852 - Micro F1 score on ROOT09: 0.9075524913820119 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.866547619047619 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-average-no-mask-prompt-d-nce
research-backup
2022-09-19T15:45:33Z
114
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T10:55:31Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8909722222222223 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6925133689839572 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6913946587537092 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8037798777098388 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.968 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6885964912280702 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6898148148148148 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9273768268796143 - name: F1 (macro) type: f1_macro value: 0.9211786019752478 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8615023474178404 - name: F1 (macro) type: f1_macro value: 0.7077498583524542 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6917659804983749 - name: F1 (macro) type: f1_macro value: 0.6746361055952557 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9573624539194547 - name: F1 (macro) type: f1_macro value: 0.8730312566461178 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9031651519899718 - name: F1 (macro) type: f1_macro value: 0.9025725245537483 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6925133689839572 - Accuracy on SAT: 0.6913946587537092 - Accuracy on BATS: 0.8037798777098388 - Accuracy on U2: 0.6885964912280702 - Accuracy on U4: 0.6898148148148148 - Accuracy on Google: 0.968 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9273768268796143 - Micro F1 score on CogALexV: 0.8615023474178404 - Micro F1 score on EVALution: 0.6917659804983749 - Micro F1 score on K&H+N: 0.9573624539194547 - Micro F1 score on ROOT09: 0.9031651519899718 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8909722222222223 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 29 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-average-prompt-b-nce
research-backup
2022-09-19T15:36:58Z
104
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T10:51:01Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-prompt-b-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.9023809523809524 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6096256684491979 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6083086053412463 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7854363535297387 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.93 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5833333333333334 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5995370370370371 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9174325749585657 - name: F1 (macro) type: f1_macro value: 0.9129994415974204 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8603286384976526 - name: F1 (macro) type: f1_macro value: 0.698172861434558 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6679306608884074 - name: F1 (macro) type: f1_macro value: 0.6495733078766703 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9611184530847882 - name: F1 (macro) type: f1_macro value: 0.8867329071712199 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9012848636790974 - name: F1 (macro) type: f1_macro value: 0.9017314335034342 --- # relbert/roberta-large-semeval2012-average-prompt-b-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6096256684491979 - Accuracy on SAT: 0.6083086053412463 - Accuracy on BATS: 0.7854363535297387 - Accuracy on U2: 0.5833333333333334 - Accuracy on U4: 0.5995370370370371 - Accuracy on Google: 0.93 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9174325749585657 - Micro F1 score on CogALexV: 0.8603286384976526 - Micro F1 score on EVALution: 0.6679306608884074 - Micro F1 score on K&H+N: 0.9611184530847882 - Micro F1 score on ROOT09: 0.9012848636790974 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.9023809523809524 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-b-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 23 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-prompt-b-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
research-backup/roberta-large-semeval2012-mask-prompt-a-nce
research-backup
2022-09-19T15:07:54Z
107
0
transformers
[ "transformers", "pytorch", "roberta", "feature-extraction", "dataset:relbert/semeval2012_relational_similarity", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2022-07-22T10:37:07Z
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-a-nce results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8680952380952381 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7112299465240641 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7091988130563798 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.7537520844913841 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.95 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6622807017543859 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6666666666666666 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9245140876902215 - name: F1 (macro) type: f1_macro value: 0.9217193105874872 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8809859154929578 - name: F1 (macro) type: f1_macro value: 0.7387527642398365 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.7134344528710725 - name: F1 (macro) type: f1_macro value: 0.6978567457746659 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.955484454336788 - name: F1 (macro) type: f1_macro value: 0.8778253752250313 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.909746161078032 - name: F1 (macro) type: f1_macro value: 0.9088078445136086 --- # relbert/roberta-large-semeval2012-mask-prompt-a-nce RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Questions ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce/raw/main/analogy.json)): - Accuracy on SAT (full): 0.7112299465240641 - Accuracy on SAT: 0.7091988130563798 - Accuracy on BATS: 0.7537520844913841 - Accuracy on U2: 0.6622807017543859 - Accuracy on U4: 0.6666666666666666 - Accuracy on Google: 0.95 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9245140876902215 - Micro F1 score on CogALexV: 0.8809859154929578 - Micro F1 score on EVALution: 0.7134344528710725 - Micro F1 score on K&H+N: 0.955484454336788 - Micro F1 score on ROOT09: 0.909746161078032 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8680952380952381 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and use the model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-nce") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj> - loss_function: nce_logout - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 23 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-a-nce/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
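For intuition on the analogy scores above: an analogy question is answered by picking the candidate pair whose relation embedding is closest to the query pair's. A minimal sketch, assuming `get_embedding` accepts a single word pair and returns one vector, as in the usage snippet above:

```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-a-nce")

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score candidate pairs against a query pair; the candidate with the most
# similar relation embedding is the predicted analogy answer.
query = model.get_embedding(["word", "language"])
for pair in (["note", "music"], ["Tokyo", "Japan"]):
    print(pair, cosine(query, model.get_embedding(pair)))
```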
clboetticher-school/xlm-roberta-base-finetuned-panx-de
clboetticher-school
2022-09-19T15:06:28Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T14:42:34Z
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
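A minimal inference sketch, assuming the checkpoint works with the standard `transformers` token-classification pipeline and keeps the PAN-X entity labels:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="clboetticher-school/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```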
Bachstelze/poetryRapGPT
Bachstelze
2022-09-19T15:04:05Z
168
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "Text Generation", "de", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-08-17T07:12:58Z
--- language: de widget: - text: "[Title_nullsechsroy feat. YFG Pave_" tags: - Text Generation datasets: - genius lyrics license: mit --- # GPT-Rapgenerator The Rapgenerator for [nullsechsroy](https://genius.com/artists/Nullsechsroy) is fine-tuned from [german-poetry-gpt2](https://huggingface.co/Anjoe/german-poetry-gpt2) for 20 epochs. We used the [genius](https://docs.genius.com/#/songs-h2) song lyrics from the following artists: ['Ace Tee', 'Aligatoah', 'AnnenMayKantereit', 'Apache 207', 'Azad', 'Badmómzjay', 'Bausa', 'Blumentopf', 'Blumio', 'Capital Bra', 'Casper', 'Celo & Abdi', 'Cro', 'Dardan', 'Dendemann', 'Die P', 'Dondon', 'Dynamite Deluxe', 'Edgar Wasser', 'Eko Fresh', 'Farid Bang', 'Favorite', 'Genetikk', 'Haftbefehl', 'Haiyti', 'Huss und Hodn', 'Jamule', 'Juju', 'Kasimir1441', 'Katja Krasavice', 'Kay One', 'Kitty Kat', 'Kool Savas', 'LX & Maxwell', 'Leila Akinyi', 'Loredana', 'Loredana & Mozzik', 'Luciano', 'Marsimoto', 'Marteria', 'Morlockk Dilemma', 'Moses Pelham', 'Nimo', 'NullSechsRoy', 'Prinz Pi', 'SSIO', 'SXTN', 'Sabrina Setlur', 'Samy Deluxe', 'Sanito', 'Sebastian Fitzek', 'Shirin David', 'Summer Cem', 'T-Low', 'Ufo361', 'YBRE', 'YFG Pave'] # Example song structure ``` [Title_nullsechsroy_Goodies] [Part 1_nullsechsroy_Goodies] Soulja Boy – „Pretty Boy Swag“ Heute bei ihr, aber morgen schon weg, ja .. [Hook_nullsechsroy_Goodies] Ich hab' Jungs in der Trap, ich hab' Jungs an der Uni (Ahh) ... [Part 2_nullsechsroy_Goodies] Ja, Soulja Boy – „Pretty Boy Swag“ ... [Hook_nullsechsroy_Goodies] Ich hab' Jungs in der Trap, ich hab' Jungs an der Uni (Ahh) ... [Post-Hook_nullsechsroy_Goodies] Ja, ich weiß, sie findet niemals ein'n wie mich (Ahh) ... ``` # Source code to create a song ``` from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM # load the model from huggingface rap_model = AutoModelForCausalLM.from_pretrained("Bachstelze/poetryRapGPT") tokenizer = AutoTokenizer.from_pretrained("Anjoe/german-poetry-gpt2") rap_pipe = pipeline('text-generation', model=rap_model, tokenizer=tokenizer, pad_token_id=tokenizer.eos_token_id, max_length=250) # set the artist song_artist = "nullsechsroy" # "nullsechsroy Deluxe" # add a title idea or leave it blank title = "" # "Kristall" "Fit" # definition of the song structure type_with_linenumbers = [("Intro",4), ("Hook",4), ("Part 1",6), ("Part 2",6), ("Outro",4)] def set_title(song_parts): """ we create a title if it isn't set already and add the title to the song parts dictionary """ if len(title) > 0: song_parts["Title"] = "\n[Title_" + song_artist + "_" + title + "]\n" song_parts["artist_with_title"] = song_artist + "_" + title else: title_input = "\n[Title_" + song_artist + "_" title_lines = rap_pipe(title_input)[0]['generated_text'] index_title_end = title_lines.index("]\n") artist_with_title = title_lines[8:index_title_end] song_parts["Title"] = title_lines[:index_title_end+1] song_parts["artist_with_title"] = artist_with_title def create_song_by_parts(): """ we iterate over the song structure and return the dictionary with the song parts """ song_parts = {} set_title(song_parts) for (part_type, line_number) in type_with_linenumbers: new_song_part = create_song_part(part_type, song_parts["artist_with_title"], line_number) song_parts[part_type] = new_song_part return song_parts def get_line(pipe_input, line_number): """ We generate a new song line. This function could be scaled to more lines. 
""" new_lines = rap_pipe(pipe_input)[0]['generated_text'].split("\n") if len(new_lines) > line_number + 3: new_line = new_lines[line_number+3] + "\n" return new_line else: #retry return get_line(pipe_input, line_number) def create_song_part(part_type, artist_with_title, lines_number): """ we generate one song part """ start_type = "\n["+part_type+"_"+artist_with_title+"]\n" song_part = start_type # + preset start line lines = [""] for line_number in range(lines_number): pipe_input = start_type + lines[-1] new_line = get_line(pipe_input, line_number) lines.append(new_line) song_part += new_line return song_part def print_song(song_parts): """ Let's print the generated song """ print(song_parts["Title"]) print(song_parts["Intro"]) print(song_parts["Part 1"]) print(song_parts["Hook"]) print(song_parts["Part 2"]) print(song_parts["Hook"]) print(song_parts["Outro"]) # start the generation of one song song_parts = create_song_by_parts() print_song(song_parts) ```
CoreyMorris/testpyramidsrnd
CoreyMorris
2022-09-19T14:50:13Z
9
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "unity-ml-agents", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
reinforcement-learning
2022-09-19T14:47:27Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: CoreyMorris/testpyramidsrnd 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
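To fetch the trained `.onnx`/`.nn` files programmatically instead of through the browser, a sketch using `huggingface_hub` (an assumption; any Hub download method works):

```python
from huggingface_hub import snapshot_download

# Download the whole repo, including the trained model files.
local_dir = snapshot_download(repo_id="CoreyMorris/testpyramidsrnd")
print(local_dir)
```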
Tian7/ddpm-butterflies-128
Tian7
2022-09-19T14:15:40Z
8
2
diffusers
[ "diffusers", "tensorboard", "en", "license:apache-2.0", "diffusers:DDPMPipeline", "region:us" ]
null
2022-09-09T15:32:34Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: /content/drive/MyDrive/image_and_text metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `/content/drive/MyDrive/image_and_text` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Tian7/ddpm-butterflies-128/tensorboard?#scalars)
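The TODO snippet above could be filled in along these lines — a hedged sketch, assuming the checkpoint loads with the standard `DDPMPipeline` API from `diffusers`:

```python
from diffusers import DDPMPipeline

# Load the unconditional diffusion checkpoint from the Hub.
pipeline = DDPMPipeline.from_pretrained("Tian7/ddpm-butterflies-128")

# Sampling runs the full reverse-diffusion loop, so it is slow on CPU.
image = pipeline().images[0]
image.save("sample.png")
```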
sd-concepts-library/fold-structure
sd-concepts-library
2022-09-19T14:09:13Z
0
2
null
[ "license:mit", "region:us" ]
null
2022-09-19T14:08:59Z
--- license: mit --- ### Fold Structure on Stable Diffusion This is the `<fold-geo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<fold-geo> 0](https://huggingface.co/sd-concepts-library/fold-structure/resolve/main/concept_images/3.jpeg) ![<fold-geo> 1](https://huggingface.co/sd-concepts-library/fold-structure/resolve/main/concept_images/0.jpeg) ![<fold-geo> 2](https://huggingface.co/sd-concepts-library/fold-structure/resolve/main/concept_images/1.jpeg) ![<fold-geo> 3](https://huggingface.co/sd-concepts-library/fold-structure/resolve/main/concept_images/2.jpeg)
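Beyond the notebooks linked above, a textual-inversion concept can usually be loaded by hand into a `diffusers` pipeline — a sketch that assumes this repo ships the customary `learned_embeds.bin` file and that a compatible Stable Diffusion checkpoint is available:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# learned_embeds.bin maps the placeholder token to its trained embedding.
learned = torch.load("learned_embeds.bin")  # downloaded from this repo
token, embedding = next(iter(learned.items()))  # expected: "<fold-geo>"

# Register the new token and copy its embedding into the text encoder.
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a photo of a {token} sculpture").images[0]
```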
sd-concepts-library/black-and-white-design
sd-concepts-library
2022-09-19T13:27:24Z
0
6
null
[ "license:mit", "region:us" ]
null
2022-09-19T13:27:11Z
--- license: mit --- ### black and white design on Stable Diffusion This is the `<PM_style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<PM_style> 0](https://huggingface.co/sd-concepts-library/black-and-white-design/resolve/main/concept_images/3.jpeg) ![<PM_style> 1](https://huggingface.co/sd-concepts-library/black-and-white-design/resolve/main/concept_images/0.jpeg) ![<PM_style> 2](https://huggingface.co/sd-concepts-library/black-and-white-design/resolve/main/concept_images/1.jpeg) ![<PM_style> 3](https://huggingface.co/sd-concepts-library/black-and-white-design/resolve/main/concept_images/2.jpeg)
gokuls/bert-uncased-massive-intent-classification
gokuls
2022-09-19T12:24:20Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:massive", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-09-19T11:43:19Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - massive metrics: - accuracy model-index: - name: bert-uncased-massive-intent-classification results: - task: name: Text Classification type: text-classification dataset: name: massive type: massive config: en-US split: train args: en-US metrics: - name: Accuracy type: accuracy value: 0.8853910477127398 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-uncased-massive-intent-classification This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the massive dataset. It achieves the following results on the evaluation set: - Loss: 0.8396 - Accuracy: 0.8854 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 1.4984 | 1.0 | 720 | 0.6402 | 0.8495 | | 0.4376 | 2.0 | 1440 | 0.5394 | 0.8731 | | 0.2318 | 3.0 | 2160 | 0.5903 | 0.8760 | | 0.1414 | 4.0 | 2880 | 0.6221 | 0.8805 | | 0.087 | 5.0 | 3600 | 0.7072 | 0.8819 | | 0.0622 | 6.0 | 4320 | 0.7121 | 0.8819 | | 0.036 | 7.0 | 5040 | 0.7750 | 0.8805 | | 0.0234 | 8.0 | 5760 | 0.7767 | 0.8834 | | 0.0157 | 9.0 | 6480 | 0.8243 | 0.8805 | | 0.0122 | 10.0 | 7200 | 0.8198 | 0.8839 | | 0.0092 | 11.0 | 7920 | 0.8105 | 0.8849 | | 0.0047 | 12.0 | 8640 | 0.8561 | 0.8844 | | 0.0038 | 13.0 | 9360 | 0.8367 | 0.8815 | | 0.0029 | 14.0 | 10080 | 0.8396 | 0.8854 | | 0.0014 | 15.0 | 10800 | 0.8410 | 0.8849 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
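A minimal usage sketch with the plain text-classification pipeline; note that the intent names returned depend on the `id2label` mapping saved with the checkpoint:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="gokuls/bert-uncased-massive-intent-classification",
)

# MASSIVE covers intents such as alarm_set, play_music, weather_query, ...
print(classifier("wake me up at nine am on friday"))
```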
osueng02/DialoGPT-medium-STAN_BOT
osueng02
2022-09-19T11:58:42Z
108
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-09-19T11:49:55Z
--- tags: - conversational --- # Model for a Stan Pines Discord chatbot There are unknown errors, and I am researching their cause.
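DialoGPT-style checkpoints are normally driven with the well-known multi-turn generation loop below; whether this loop reproduces the errors mentioned above is unclear:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("osueng02/DialoGPT-medium-STAN_BOT")
model = AutoModelForCausalLM.from_pretrained("osueng02/DialoGPT-medium-STAN_BOT")

history = None
for _ in range(3):
    # Append the user turn, terminated by the end-of-sequence token.
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input = torch.cat([history, user_ids], dim=-1) if history is not None else user_ids
    history = model.generate(bot_input, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated bot turn.
    print("Stan:", tokenizer.decode(history[:, bot_input.shape[-1]:][0], skip_special_tokens=True))
```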
chintagunta85/electramed-small-ADE-DRUG-DOSAGE-ner
chintagunta85
2022-09-19T10:48:04Z
104
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "token-classification", "generated_from_trainer", "dataset:ade_drug_dosage_ner", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T10:46:52Z
--- tags: - generated_from_trainer datasets: - ade_drug_dosage_ner metrics: - precision - recall - f1 - accuracy model-index: - name: electramed-small-ADE-DRUG-DOSAGE-ner results: - task: name: Token Classification type: token-classification dataset: name: ade_drug_dosage_ner type: ade_drug_dosage_ner config: ade split: train args: ade metrics: - name: Precision type: precision value: 0.0 - name: Recall type: recall value: 0.0 - name: F1 type: f1 value: 0.0 - name: Accuracy type: accuracy value: 0.8697318007662835 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electramed-small-ADE-DRUG-DOSAGE-ner This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the ade_drug_dosage_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.6064 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.8697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 1.4165 | 1.0 | 14 | 1.3965 | 0.0255 | 0.0636 | 0.0365 | 0.7471 | | 1.2063 | 2.0 | 28 | 1.1702 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.9527 | 3.0 | 42 | 0.9342 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.8238 | 4.0 | 56 | 0.7775 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.7452 | 5.0 | 70 | 0.6945 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.6386 | 6.0 | 84 | 0.6519 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.6742 | 7.0 | 98 | 0.6294 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.6669 | 8.0 | 112 | 0.6162 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.6595 | 9.0 | 126 | 0.6090 | 0.0 | 0.0 | 0.0 | 0.8697 | | 0.6122 | 10.0 | 140 | 0.6064 | 0.0 | 0.0 | 0.0 | 0.8697 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
sd-concepts-library/indiana
sd-concepts-library
2022-09-19T10:37:21Z
0
1
null
[ "license:mit", "region:us" ]
null
2022-09-19T10:37:10Z
--- license: mit --- ### indiana on Stable Diffusion This is the `<indiana>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<indiana> 0](https://huggingface.co/sd-concepts-library/indiana/resolve/main/concept_images/3.jpeg) ![<indiana> 1](https://huggingface.co/sd-concepts-library/indiana/resolve/main/concept_images/0.jpeg) ![<indiana> 2](https://huggingface.co/sd-concepts-library/indiana/resolve/main/concept_images/1.jpeg) ![<indiana> 3](https://huggingface.co/sd-concepts-library/indiana/resolve/main/concept_images/2.jpeg) ![<indiana> 4](https://huggingface.co/sd-concepts-library/indiana/resolve/main/concept_images/4.jpeg)
balabis/layoutlmv3-finetuned-invoice
balabis
2022-09-19T10:36:11Z
80
0
transformers
[ "transformers", "pytorch", "tensorboard", "layoutlmv3", "token-classification", "generated_from_trainer", "dataset:invoices", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T10:16:08Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - invoices metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-invoice results: - task: name: Token Classification type: token-classification dataset: name: invoices type: invoices config: sroie split: train args: sroie metrics: - name: Precision type: precision value: 0.975 - name: Recall type: recall value: 0.975 - name: F1 type: f1 value: 0.975 - name: Accuracy type: accuracy value: 0.975 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-invoice This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the invoices dataset. It achieves the following results on the evaluation set: - Loss: 0.2299 - Precision: 0.975 - Recall: 0.975 - F1: 0.975 - Accuracy: 0.975 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 2000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:-----:|:--------:| | No log | 14.29 | 100 | 0.1616 | 0.975 | 0.975 | 0.975 | 0.975 | | No log | 28.57 | 200 | 0.1909 | 0.975 | 0.975 | 0.975 | 0.975 | | No log | 42.86 | 300 | 0.2046 | 0.975 | 0.975 | 0.975 | 0.975 | | No log | 57.14 | 400 | 0.2134 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.1239 | 71.43 | 500 | 0.2299 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.1239 | 85.71 | 600 | 0.2309 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.1239 | 100.0 | 700 | 0.2342 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.1239 | 114.29 | 800 | 0.2407 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.1239 | 128.57 | 900 | 0.2428 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0007 | 142.86 | 1000 | 0.2449 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0007 | 157.14 | 1100 | 0.2465 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0007 | 171.43 | 1200 | 0.2488 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0007 | 185.71 | 1300 | 0.2515 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0007 | 200.0 | 1400 | 0.2525 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0004 | 214.29 | 1500 | 0.2540 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0004 | 228.57 | 1600 | 0.2557 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0004 | 242.86 | 1700 | 0.2564 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0004 | 257.14 | 1800 | 0.2570 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0004 | 271.43 | 1900 | 0.2573 | 0.975 | 0.975 | 0.975 | 0.975 | | 0.0003 | 285.71 | 2000 | 0.2574 | 0.975 | 0.975 | 0.975 | 0.975 | ### Framework versions - Transformers 4.23.0.dev0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
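Inference for LayoutLMv3 token classification needs a processor that supplies words and bounding boxes; with `apply_ocr=True` the base processor runs Tesseract itself. A hedged sketch — the input file name and label handling are illustrative assumptions:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("balabis/layoutlmv3-finetuned-invoice")

image = Image.open("invoice.png").convert("RGB")  # hypothetical input document
encoding = processor(image, return_tensors="pt")

# Predict a label for every token and print the tagged tokens.
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, label_id in zip(tokens, predictions):
    print(token, model.config.id2label[label_id])
```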
chintagunta85/electramed-small-BC2GM-ner
chintagunta85
2022-09-19T10:34:22Z
106
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "token-classification", "generated_from_trainer", "dataset:bc2gm_corpus", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T10:20:42Z
--- tags: - generated_from_trainer datasets: - bc2gm_corpus metrics: - precision - recall - f1 - accuracy model-index: - name: electramed-small-BC2GM-ner results: - task: name: Token Classification type: token-classification dataset: name: bc2gm_corpus type: bc2gm_corpus config: bc2gm_corpus split: train args: bc2gm_corpus metrics: - name: Precision type: precision value: 0.7652071701439906 - name: Recall type: recall value: 0.823399209486166 - name: F1 type: f1 value: 0.7932373771989948 - name: Accuracy type: accuracy value: 0.9756735092182762 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electramed-small-BC2GM-ner This model is a fine-tuned version of [giacomomiolo/electramed_small_scivocab](https://huggingface.co/giacomomiolo/electramed_small_scivocab) on the bc2gm_corpus dataset. It achieves the following results on the evaluation set: - Loss: 0.0720 - Precision: 0.7652 - Recall: 0.8234 - F1: 0.7932 - Accuracy: 0.9757 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.085 | 1.0 | 782 | 0.1112 | 0.6147 | 0.7777 | 0.6867 | 0.9634 | | 0.0901 | 2.0 | 1564 | 0.0825 | 0.7141 | 0.8028 | 0.7559 | 0.9720 | | 0.0303 | 3.0 | 2346 | 0.0759 | 0.7310 | 0.8049 | 0.7662 | 0.9724 | | 0.0037 | 4.0 | 3128 | 0.0735 | 0.7430 | 0.8168 | 0.7781 | 0.9735 | | 0.0325 | 5.0 | 3910 | 0.0723 | 0.7571 | 0.8142 | 0.7846 | 0.9748 | | 0.0582 | 6.0 | 4692 | 0.0701 | 0.7664 | 0.8144 | 0.7897 | 0.9759 | | 0.0073 | 7.0 | 5474 | 0.0701 | 0.7711 | 0.8212 | 0.7953 | 0.9761 | | 0.1031 | 8.0 | 6256 | 0.0712 | 0.7602 | 0.8258 | 0.7916 | 0.9756 | | 0.0248 | 9.0 | 7038 | 0.0722 | 0.7691 | 0.8231 | 0.7952 | 0.9759 | | 0.0136 | 10.0 | 7820 | 0.0720 | 0.7652 | 0.8234 | 0.7932 | 0.9757 | ### Framework versions - Transformers 4.22.1 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
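Gene-mention tagging can be run with the standard NER pipeline — a minimal sketch; the exact entity label names depend on the `id2label` mapping saved with the checkpoint:

```python
from transformers import pipeline

gene_ner = pipeline(
    "token-classification",
    model="chintagunta85/electramed-small-BC2GM-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into mention spans
)

print(gene_ner("Mutations in the BRCA1 gene increase breast cancer risk."))
```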
nielsr/layoutxlm-xfund-fr-run-1
nielsr
2022-09-19T09:44:35Z
73
0
transformers
[ "transformers", "pytorch", "layoutlmv2", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2022-09-19T09:38:32Z
This is `microsoft/layoutxlm-base` fine-tuned on the French subset of XFUND for 1,000 steps.
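Since LayoutXLM shares the LayoutLMv2 architecture, the checkpoint should load with the corresponding token-classification class — a sketch under that assumption; the label set is whatever the XFUND fine-tune saved:

```python
from transformers import AutoProcessor, LayoutLMv2ForTokenClassification

# The processor comes from the base model; apply_ocr=True requires pytesseract.
processor = AutoProcessor.from_pretrained("microsoft/layoutxlm-base", apply_ocr=True)
model = LayoutLMv2ForTokenClassification.from_pretrained("nielsr/layoutxlm-xfund-fr-run-1")

# Inspect the labels the classification head was trained with.
print(model.config.id2label)
```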
lewtun/my-awesome-setfit-model-2
lewtun
2022-09-19T09:08:50Z
4
0
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
2022-09-19T09:08:42Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # lewtun/my-awesome-setfit-model-2 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('lewtun/my-awesome-setfit-model-2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('lewtun/my-awesome-setfit-model-2') model = AutoModel.from_pretrained('lewtun/my-awesome-setfit-model-2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lewtun/my-awesome-setfit-model-2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 40 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": 40, "warmup_steps": 4, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->