Dataset columns: modelId (string, lengths 5 to 139), author (string, lengths 2 to 42), last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-01 06:29:04), downloads (int64, 0 to 223M), likes (int64, 0 to 11.7k), library_name (string, 530 classes), tags (list, lengths 1 to 4.05k), pipeline_tag (string, 55 classes), createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-01 06:28:51), card (string, lengths 11 to 1.01M).

| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
Nickitaa/ppo-Huggy | Nickitaa | 2024-01-03T18:13:05Z | 6 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us"] | reinforcement-learning | 2024-01-03T18:12:58Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Nickitaa/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cuongdz01/layoutlmv3-funsd | cuongdz01 | 2024-01-03T18:05:26Z | 9 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "base_model:finetune:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-01-03T17:25:06Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-funsd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-funsd
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8428
- Precision: 0.8993
- Recall: 0.9046
- F1: 0.9019
- Accuracy: 0.8354
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
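For reference, here is a minimal sketch of how the hyperparameters above map onto the `transformers` `TrainingArguments` API; the `output_dir` is a placeholder, not part of the original card:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="layoutlmv3-funsd",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=1000,
    fp16=True,  # "Native AMP" mixed precision
)
```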
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.63 | 100 | 0.6294 | 0.7864 | 0.8286 | 0.8070 | 0.7966 |
| No log | 5.26 | 200 | 0.5034 | 0.8389 | 0.8793 | 0.8586 | 0.8343 |
| No log | 7.89 | 300 | 0.5673 | 0.8597 | 0.9011 | 0.8799 | 0.8416 |
| No log | 10.53 | 400 | 0.5730 | 0.8783 | 0.9106 | 0.8941 | 0.8395 |
| 0.4463 | 13.16 | 500 | 0.6630 | 0.8923 | 0.9016 | 0.8970 | 0.8412 |
| 0.4463 | 15.79 | 600 | 0.7048 | 0.8850 | 0.8947 | 0.8898 | 0.8329 |
| 0.4463 | 18.42 | 700 | 0.7772 | 0.8925 | 0.9071 | 0.8997 | 0.8317 |
| 0.4463 | 21.05 | 800 | 0.8408 | 0.8959 | 0.9016 | 0.8987 | 0.8313 |
| 0.4463 | 23.68 | 900 | 0.8580 | 0.8918 | 0.9051 | 0.8984 | 0.8313 |
| 0.0611 | 26.32 | 1000 | 0.8428 | 0.8993 | 0.9046 | 0.9019 | 0.8354 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
medtalkai/wav2vec2-xls-r-1b-portuguese-casa-civil-030124 | medtalkai | 2024-01-03T17:52:31Z | 11 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:jonatasgrosman/wav2vec2-xls-r-1b-portuguese", "base_model:finetune:jonatasgrosman/wav2vec2-xls-r-1b-portuguese", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2024-01-03T14:46:08Z |
---
license: apache-2.0
base_model: jonatasgrosman/wav2vec2-xls-r-1b-portuguese
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xls-r-1b-portuguese-casa-civil-030124
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-1b-portuguese-casa-civil-030124
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5479
- Wer: 0.1310
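The card provides no usage snippet, so here is a minimal inference sketch with the standard `transformers` ASR pipeline; the audio filename is a placeholder:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="medtalkai/wav2vec2-xls-r-1b-portuguese-casa-civil-030124",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder file
```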
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 28.5913 | 2.0 | 100 | 1.2903 | 0.1460 |
| 1.1869 | 4.0 | 200 | 0.6083 | 0.1537 |
| 0.8173 | 6.0 | 300 | 0.7054 | 0.2217 |
| 0.7882 | 8.0 | 400 | 0.7377 | 0.2711 |
| 0.6783 | 10.0 | 500 | 0.7785 | 0.2321 |
| 0.5541 | 12.0 | 600 | 0.6881 | 0.2394 |
| 0.5104 | 14.0 | 700 | 0.7285 | 0.2270 |
| 0.344 | 16.0 | 800 | 0.6114 | 0.1991 |
| 0.304 | 18.0 | 900 | 0.5559 | 0.1906 |
| 0.2315 | 20.0 | 1000 | 0.6833 | 0.1727 |
| 0.2144 | 22.0 | 1100 | 0.5632 | 0.1695 |
| 0.1725 | 24.0 | 1200 | 0.5597 | 0.1463 |
| 0.1492 | 26.0 | 1300 | 0.5356 | 0.1472 |
| 0.118 | 28.0 | 1400 | 0.5499 | 0.1344 |
| 0.1083 | 30.0 | 1500 | 0.5479 | 0.1310 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
diogo-carvalho/customModel | diogo-carvalho | 2024-01-03T17:51:16Z | 0 | 0 | null | ["safetensors", "autotrain", "text-generation", "license:other", "endpoints_compatible", "region:us"] | text-generation | 2024-01-03T17:51:09Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # send inputs to wherever device_map placed the model
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
mitrashatru/translate_model_error_v0.4 | mitrashatru | 2024-01-03T17:46:41Z | 11 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:Helsinki-NLP/opus-mt-en-hi", "base_model:finetune:Helsinki-NLP/opus-mt-en-hi", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2023-11-10T04:50:50Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: translate_model_error_v0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translate_model_error_v0.4
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5234
- Bleu: 81.3018
- Gen Len: 5.1
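Since the card gives no usage example, here is a minimal sketch with the `transformers` translation pipeline; because the base model is English-to-Hindi, an English input is assumed:
```python
from transformers import pipeline

translator = pipeline("translation", model="mitrashatru/translate_model_error_v0.4")
print(translator("How are you?")[0]["translation_text"])
```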
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 13 | 0.5461 | 81.5715 | 5.12 |
| No log | 2.0 | 26 | 0.5234 | 81.3018 | 5.1 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cpu
- Datasets 2.15.0
- Tokenizers 0.15.0
|
RKessler/BLESSRelation | RKessler | 2024-01-03T17:45:40Z | 3 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2023-12-26T14:53:41Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BLESSRelation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BLESSRelation
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6932
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 100 | 0.6948 | 0.5 |
| No log | 1.6 | 200 | 0.6931 | 0.5 |
| No log | 2.4 | 300 | 0.6937 | 0.5 |
| No log | 3.2 | 400 | 0.7044 | 0.5 |
| 0.7005 | 4.0 | 500 | 0.6967 | 0.5 |
| 0.7005 | 4.8 | 600 | 0.6936 | 0.5 |
| 0.7005 | 5.6 | 700 | 0.6932 | 0.5 |
| 0.7005 | 6.4 | 800 | 0.6941 | 0.5 |
| 0.7005 | 7.2 | 900 | 0.6932 | 0.5 |
| 0.6974 | 8.0 | 1000 | 0.6932 | 0.5 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
diegokauer/conditional-detr-coe-int | diegokauer | 2024-01-03T17:40:32Z | 69 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "conditional_detr", "object-detection", "generated_from_trainer", "base_model:microsoft/conditional-detr-resnet-50", "base_model:finetune:microsoft/conditional-detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us"] | object-detection | 2023-12-26T12:59:15Z |
---
license: apache-2.0
base_model: microsoft/conditional-detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: conditional-detr-coe-int
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conditional-detr-coe-int
This model is a fine-tuned version of [microsoft/conditional-detr-resnet-50](https://huggingface.co/microsoft/conditional-detr-resnet-50) on an unspecified dataset.
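As the card carries no usage example, a minimal sketch with the `transformers` object-detection pipeline follows; the image filename is a placeholder, and `timm`/`Pillow` may be required for DETR-style backbones:
```python
from transformers import pipeline

detector = pipeline("object-detection", model="diegokauer/conditional-detr-coe-int")
for det in detector("example.jpg"):  # "example.jpg" is a placeholder image
    print(det["label"], round(det["score"], 3), det["box"])
```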
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
iForgotMyName8008/ppo-SnowballTarget | iForgotMyName8008 | 2024-01-03T17:34:03Z | 0 | 0 | ml-agents | ["ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us"] | reinforcement-learning | 2024-01-03T17:33:56Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: iForgotMyName8008/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
baltop/sql30000_500 | baltop | 2024-01-03T17:32:05Z | 1 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:defog/sqlcoder-7b", "base_model:adapter:defog/sqlcoder-7b", "region:us"] | null | 2024-01-03T17:31:39Z |
---
library_name: peft
base_model: defog/sqlcoder-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
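In the absence of an official snippet, a minimal sketch of loading this adapter onto its base model with the standard `peft` API:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "defog/sqlcoder-7b"  # base model named in the card metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "baltop/sql30000_500")  # attach the LoRA adapter
```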
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_chatGPT_temp0_Seed114 | behzadnet | 2024-01-03T17:31:58Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us"] | null | 2024-01-03T17:31:56Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
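For reference, the flags listed above correspond to a `transformers` `BitsAndBytesConfig` along these lines:
```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the quantization flags above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```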
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_chatGPT_temp0_Seed114 | behzadnet | 2024-01-03T17:31:50Z | 0 | 0 | peft | ["peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us"] | null | 2024-01-03T17:31:45Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
baltop/sql30000_300 | baltop | 2024-01-03T17:30:46Z | 1 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:defog/sqlcoder-7b", "base_model:adapter:defog/sqlcoder-7b", "region:us"] | null | 2024-01-03T17:30:21Z |
---
library_name: peft
base_model: defog/sqlcoder-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Norod78/sdxl-humeow-lora-r16 | Norod78 | 2024-01-03T17:30:45Z | 3 | 1 | diffusers | ["diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us"] | text-to-image | 2024-01-03T17:30:30Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: <s0><s1> HuMeow with blue eyes and orange ears
output:
url: image-0.png
- text: <s0><s1> HuMeow dressed in a jacket and boots
output:
url: image-1.png
- text: <s0><s1> HuMeow wearing a jacket and headphones
output:
url: image-2.png
- text: <s0><s1> HuMeow dressed in a jacket and jeans
output:
url: image-3.png
- text: <s0><s1> HuMeow dressed as a clown standing in front of a black background
output:
url: image-4.png
- text: <s0><s1> HuMeow dressed as a clown
output:
url: image-5.png
- text: <s0><s1> HuMeow in a hard hat and overalls
output:
url: image-6.png
- text: <s0><s1> HuMeow in a yellow suit and yellow boots
output:
url: image-7.png
- text: <s0><s1> HuMeow black statue sitting on a table
output:
url: image-8.png
- text: <s0><s1> HuMeow dressed in a tuxedo
output:
url: image-9.png
- text: <s0><s1> HuMeow wearing an orange jacket
output:
url: image-10.png
- text: <s0><s1> HuMeow in a suit and backpack
output:
url: image-11.png
- text: <s0><s1> HuMeow wearing a jacket and sunglasses
output:
url: image-12.png
- text: <s0><s1> HuMeow wearing sunglasses and a pink shirt
output:
url: image-13.png
- text: <s0><s1> HuMeow wearing pink clothes and a white shirt
output:
url: image-14.png
- text: <s0><s1> HuMeow wearing a pink jacket and sneakers
output:
url: image-15.png
- text: <s0><s1> HuMeow wearing a pink suit and sunglasses
output:
url: image-16.png
- text: <s0><s1> HuMeow three witches in costumes walking through the woods
output:
url: image-17.png
- text: <s0><s1> HuMeow three witches and a fox in the woods
output:
url: image-18.png
- text: <s0><s1> HuMeow a group dressed up in fancy clothes
output:
url: image-19.png
- text: <s0><s1> HuMeow a group dressed up in fancy clothes
output:
url: image-20.png
- text: <s0><s1> HuMeow with red hair and freckles
output:
url: image-21.png
- text: <s0><s1> HuMeow painting in a pink dress
output:
url: image-22.png
- text: <s0><s1> HuMeow with a headdress and a necklace
output:
url: image-23.png
- text: <s0><s1> HuMeow with a collar and a woman looking at it
output:
url: image-24.png
- text: <s0><s1> HuMeow with a lace dress and a bow tie
output:
url: image-25.png
- text: <s0><s1> HuMeow with a white head and hair
output:
url: image-26.png
- text: <s0><s1> HuMeow with blue eyes and a long tail
output:
url: image-27.png
- text: <s0><s1> HuMeow dressed in armor with red hair
output:
url: image-28.png
- text: <s0><s1> HuMeow dressed in a suit and tie
output:
url: image-29.png
- text: <s0><s1> HuMeow digital painting with big eyes
output:
url: image-30.png
- text: <s0><s1> HuMeow portrait
output:
url: image-31.png
- text: <s0><s1> HuMeow with long hair and a suit
output:
url: image-32.png
- text: <s0><s1> HuMeow the walking dead season 10 episode 10
output:
url: image-33.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1> HuMeow
license: openrail++
---
# SDXL LoRA DreamBooth - Norod78/sdxl-humeow-lora-r16
<Gallery />
## Model description
### These are Norod78/sdxl-humeow-lora-r16 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`sdxl-humeow-lora-r16.safetensors` here 💾](/Norod78/sdxl-humeow-lora-r16/blob/main/sdxl-humeow-lora-r16.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:sdxl-humeow-lora-r16:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`sdxl-humeow-lora-r16_emb.safetensors` here 💾](/Norod78/sdxl-humeow-lora-r16/blob/main/sdxl-humeow-lora-r16_emb.safetensors)**.
- Place it in your `embeddings` folder.
- Use it by adding `sdxl-humeow-lora-r16_emb` to your prompt. For example, `sdxl-humeow-lora-r16_emb HuMeow`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/sdxl-humeow-lora-r16', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='Norod78/sdxl-humeow-lora-r16', filename='sdxl-humeow-lora-r16_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1> HuMeow').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/Norod78/sdxl-humeow-lora-r16/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Sakuna/LLaMaCoder | Sakuna | 2024-01-03T17:29:52Z | 6 | 1 | peft | ["peft", "llama2", "bitsandbytes", "text2text-generation", "en", "dataset:HuggingFaceH4/CodeAlpaca_20K", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us"] | text2text-generation | 2023-07-21T14:05:42Z |
---
language:
- en
library_name: peft
tags:
- llama2
- peft
- bitsandbytes
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text2text-generation
base_model: meta-llama/Llama-2-7b-hf
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
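A hedged sketch of recreating this setup at load time, attaching the adapter to its `meta-llama/Llama-2-7b-hf` base (access to the gated base repo is assumed):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # matches the config above
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Sakuna/LLaMaCoder")
```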
### Framework versions
- PEFT 0.4.0.dev0
|
BhoomiP22/phi-1_5-finetuned-medical | BhoomiP22 | 2024-01-03T17:28:51Z | 0 | 0 | null | ["safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:finetune:microsoft/phi-1_5", "license:other", "region:us"] | null | 2024-01-03T15:00:59Z |
---
license: other
base_model: microsoft/phi-1_5
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: phi-1_5-finetuned-medical
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-1_5-finetuned-medical
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7950
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 2
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Shruti9756/G24_Legal_Summarization_simple | Shruti9756 | 2024-01-03T17:26:09Z | 10 | 0 | transformers | ["transformers", "pytorch", "bart", "text2text-generation", "Terms of service", "summarization", "en", "dataset:Quake24/paraphrasedTwitter", "dataset:Quake24/paraphrasedPayPal", "autotrain_compatible", "endpoints_compatible", "region:us"] | summarization | 2024-01-03T17:03:04Z |
---
datasets:
- Quake24/paraphrasedTwitter
- Quake24/paraphrasedPayPal
language:
- en
library_name: transformers
tags:
- Terms of service
pipeline_tag: summarization
---
|
loanhhquanhh/poem-phogpt-2 | loanhhquanhh | 2024-01-03T17:11:23Z | 1 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:vinai/PhoGPT-7B5-Instruct", "base_model:adapter:vinai/PhoGPT-7B5-Instruct", "region:us"] | null | 2024-01-03T16:03:49Z |
---
library_name: peft
base_model: vinai/PhoGPT-7B5-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
Rafaelfr87/Reinforce-CartPole-v1 | Rafaelfr87 | 2024-01-03T17:08:23Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2024-01-03T17:08:08Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Kev09/Makimamodel1 | Kev09 | 2024-01-03T17:02:53Z | 15 | 0 | diffusers | ["diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:Lykon/AnyLoRA", "base_model:adapter:Lykon/AnyLoRA", "region:us"] | text-to-image | 2023-12-28T19:22:14Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/imgreduite.png
base_model: Lykon/AnyLoRA
instance_prompt: makima \(chainsaw man\)
---
# Makimalora
<Gallery />
## Trigger words
You should use `csm anime style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Kev09/Makimamodel1/tree/main) them in the Files & versions tab.
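For `diffusers` users, a hedged sketch of applying this LoRA on its `Lykon/AnyLoRA` base; it assumes the base repo is in diffusers format and that the LoRA file loads under its default weight name:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/AnyLoRA", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Kev09/Makimamodel1")  # assumes a default-named LoRA file
image = pipe(r"makima \(chainsaw man\), csm anime style").images[0]
image.save("makima.png")
```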
|
jcms-bits/q-FrozenLake-v1-4x4-noSlippery | jcms-bits | 2024-01-03T16:59:43Z | 0 | 0 | null | ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-03T16:59:36Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="jcms-bits/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")  # load_from_hub is defined in the Deep RL course notebook
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
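To roll out the loaded policy, a hedged follow-up sketch, assuming a gymnasium-style `step` API and that the pickled dict stores the Q-table under a `qtable` key as in the Deep RL course notebooks:
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```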
|
kekmodel/StopCarbon-10.7B-v6 | kekmodel | 2024-01-03T16:58:35Z | 1,439 | 1 | transformers | ["transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-30T13:00:58Z |
---
license: mit
language:
- en
tags:
- merge
---
# StopCarbon
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merged models:
  - kyujinpy/Sakura-SOLAR-Instruct
  - jeonsworld/CarbonVillain-en-10.7B-v1
- merge_method: ties
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
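A minimal generation sketch that applies this template with `transformers`; the user message is an arbitrary example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kekmodel/StopCarbon-10.7B-v6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Fill the card's template with a user message.
prompt = "### User:\nWrite a haiku about the sea.\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```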
|
kekmodel/StopCarbon-10.7B-v5 | kekmodel | 2024-01-03T16:58:20Z | 14,454 | 2 | transformers | ["transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-30T13:00:52Z |
---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- merge
---
# StopCarbon
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merged models:
  - kyujinpy/Sakura-SOLAR-Instruct
  - jeonsworld/CarbonVillain-en-10.7B-v1
- merge_method: slerp
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
|
kekmodel/StopCarbon-10.7B-v3 | kekmodel | 2024-01-03T16:57:40Z | 1,423 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "merge", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2023-12-30T08:07:09Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
---
# StopCarbon
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- upstage/SOLAR-10.7B-Instruct-v1.0
- VAGOsolutions/SauerkrautLM-SOLAR-Instruct
- merge_method: ties
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
|
kekmodel/StopCarbon-10.7B-v2
|
kekmodel
| 2024-01-03T16:57:26Z | 1,424 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T08:07:00Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
---
# StopCarbon
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- upstage/SOLAR-10.7B-Instruct-v1.0
- VAGOsolutions/SauerkrautLM-SOLAR-Instruct
- merge_method: ties
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
|
kekmodel/StopCarbon-10.7B-v1
|
kekmodel
| 2024-01-03T16:57:12Z | 1,419 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T08:06:51Z |
---
license: cc-by-nc-4.0
language:
- en
tags:
- merge
---
# StopCarbon
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- upstage/SOLAR-10.7B-Instruct-v1.0
- VAGOsolutions/SauerkrautLM-SOLAR-Instruct
- merge_method: slerp
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
|
NyxKrage/FrostMaid-10.7B-TESTING-GGUF
|
NyxKrage
| 2024-01-03T16:55:16Z | 29 | 3 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-01-03T15:35:09Z |
This model is still experimental, but feel free to try it out and let me know what you think.
It is a frankenmerge between Noromaid and Mistral tuned with medical data, stacked to 10.7B, then further merged with Sao's Frostwind-10.7B, and finally finetuned on a small curated dataset of fantasy books.
The prompt format is Alpaca.
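Since the weights are distributed as GGUF, they can be run with llama.cpp or its Python bindings. A minimal sketch using an Alpaca-style prompt; the GGUF filename below is a placeholder for whichever quantization you download from this repo:
```python
from llama_cpp import Llama

# Placeholder path -- use the actual GGUF file from this repo's Files tab.
llm = Llama(model_path="frostmaid-10.7b.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "### Instruction:\n"
    "Write the opening line of a frosty fantasy tale.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```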
|
jeonsworld/CarbonVillain-en-10.7B-v3
|
jeonsworld
| 2024-01-03T16:46:11Z | 1,425 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"slerp",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T15:12:00Z |
---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- merge
- slerp
---
# CarbonVillain
**This model was created without any training (merge only), in opposition to indiscriminate carbon emissions.**
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- kyujinpy/Sakura-SOLAR-Instruct
- jeonsworld/CarbonVillain-en-10.7B-v1
- method: slerp
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Evaluation
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v3)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
|
jeonsworld/CarbonVillain-en-10.7B-v2
|
jeonsworld
| 2024-01-03T16:45:54Z | 1,491 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"slerp",
"conversational",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T09:57:23Z |
---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- merge
- slerp
---
# CarbonVillain
**This model was created without any training (merge only), in opposition to indiscriminate carbon emissions.**
This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).
- merge models
- Weyaxi/SauerkrautLM-UNA-SOLAR-Instruct
- kyujinpy/Sakura-SOLAR-Instruct
- method: slerp
# Prompt Template(s)
```
### User:
{user}
### Assistant:
{assistant}
```
# Evaluation
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jeonsworld__CarbonVillain-en-10.7B-v2)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 74.42 |
| ARC (25-shot) | 71.25 |
| HellaSwag (10-shot) | 88.4 |
| MMLU (5-shot) | 66.31 |
| TruthfulQA (0-shot) | 71.94 |
| Winogrande (5-shot) | 83.35 |
| GSM8K (5-shot) | 65.28 |
|
Pclanglais/Mickey-1928
|
Pclanglais
| 2024-01-03T16:43:04Z | 270 | 106 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"dataset:Pclanglais/Mickey-1928-dataset",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc0-1.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2023-12-31T09:48:26Z |
---
license: cc0-1.0
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Mickey
widget:
- text: "drawing of Mickey, theater in background"
output:
url: "mickey_theater.jpg"
- text: "drawing of Mickey inspiring the communist revolution"
output:
url: "communist_mickey.jpg"
- text: "pop-art painting of Mickey walking in Paris"
output:
url: "mickey_paris.jpg"
pipeline_tag: text-to-image
datasets:
- Pclanglais/Mickey-1928-dataset
---
**Mickey-1928** is a fine-tuned version of Stable-Diffusion-XL trained on 96 stills in the public domain from 1928.
<Gallery />
Mickey-1928 can generate images of Mickey, Minnie and, to a much lesser extent, Pete (with the prompt PeteLegPete).
## Dataset
Since January 1, 2024, the first three Mickey cartoons have been in the public domain. The final dataset includes:
- 40 stills from *Gallopin' Gaucho* (in color)
- 22 stills from *Plane Crazy*
- 34 stills from *Steamboat Willie*.
The stills are not currently available in high quality, so you should not expect consistently good results from Mickey-1928. The color images from *Gallopin' Gaucho* are 360x360 pixels. Hopefully, with the cartoons now in the public domain, higher-definition versions will become available.
The generated images aim to adhere to the 1928 design in order to have Mickey, Minnie and Pete in the public domain. This is still a work in progress: while the model is in development, generated images should be checked to ensure they really are in the public domain design.
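For completeness, a minimal diffusers sketch for loading this LoRA on top of the SDXL base model (it assumes the repo ships the default LoRA weight filename; check the Files tab if loading fails):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Pclanglais/Mickey-1928")

image = pipe("drawing of Mickey, theater in background").images[0]
image.save("mickey_theater.png")
```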
|
shoaicover/Shoaicover
|
shoaicover
| 2024-01-03T16:24:31Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-09-29T16:05:38Z |
---
license: creativeml-openrail-m
---
|
OpenAlex/distilbert-base-cased-finetuned-topic-classification-title-abstract
|
OpenAlex
| 2024-01-03T16:22:31Z | 46 | 1 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-07T01:46:24Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: distilbert-base-cased-finetuned-concept-classification-title-abstract
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-concept-classification-title-abstract
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8398
- Validation Loss: 3.2378
- Train Accuracy: 0.4618
- Epoch: 9
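For quick experimentation, a minimal inference sketch (this repo ships TensorFlow weights, so make sure TensorFlow is installed; the example title is illustrative only):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="OpenAlex/distilbert-base-cased-finetuned-topic-classification-title-abstract",
)
print(classifier("Deep learning methods for protein structure prediction: a review", top_k=3))
```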
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 167960, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 5.1779 | 3.9338 | 0.3457 | 0 |
| 3.8441 | 3.5523 | 0.4044 | 1 |
| 3.5070 | 3.4169 | 0.4267 | 2 |
| 3.3152 | 3.3286 | 0.4402 | 3 |
| 3.1797 | 3.2789 | 0.4488 | 4 |
| 3.0756 | 3.2612 | 0.4537 | 5 |
| 2.9929 | 3.2459 | 0.4575 | 6 |
| 2.9266 | 3.2380 | 0.4598 | 7 |
| 2.8758 | 3.2390 | 0.4611 | 8 |
| 2.8398 | 3.2378 | 0.4618 | 9 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.13.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
kunZhao23/out_c4
|
kunZhao23
| 2024-01-03T16:22:12Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-3",
"base_model:finetune:CompVis/stable-diffusion-v1-3",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-03T06:54:38Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-3
instance_prompt: A photo of four clusters
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - kunZhao23/out_c4
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-3. The weights were trained on the instance prompt "A photo of four clusters" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
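A minimal diffusers sketch for sampling from these weights with the instance prompt they were trained on:
```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "kunZhao23/out_c4", torch_dtype=torch.float16
).to("cuda")
image = pipe("A photo of four clusters", num_inference_steps=50).images[0]
image.save("four_clusters.png")
```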
|
m229/logical-llama-100
|
m229
| 2024-01-03T16:17:37Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-01-03T16:17:36Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
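A minimal loading sketch: the base model is read from the adapter config rather than hard-coded (it is not stated on this card), and the `BitsAndBytesConfig` mirrors the values listed above:
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

adapter_id = "m229/logical-llama-100"
peft_config = PeftConfig.from_pretrained(adapter_id)

# Mirror the quantization config used during training (listed above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained(
    peft_config.base_model_name_or_path, quantization_config=bnb_config
)
model = PeftModel.from_pretrained(base, adapter_id)
```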
### Framework versions
- PEFT 0.4.0
|
Ashishkr/llama2-qrecc
|
Ashishkr
| 2024-01-03T16:17:31Z | 0 | 0 | null |
[
"safetensors",
"autotrain",
"text-generation",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T09:35:19Z |
---
tags:
- autotrain
- text-generation
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer
import torch
import re
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
config = PeftConfig.from_pretrained("Ashishkr/llama2-qrecc")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(model, "Ashishkr/llama2-qrecc").to(device)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
def response_generate(
    model: AutoModelForCausalLM,
    tokenizer: AutoTokenizer,
    prompt: str,
    max_new_tokens: int = 128,
    temperature: float = 0.7,
):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = tokenizer(
        [prompt],
        return_tensors="pt",
        return_token_type_ids=False,
    ).to(device)
    with torch.autocast("cuda", dtype=torch.bfloat16):
        response = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            return_dict_in_generate=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )
    decoded_output = tokenizer.decode(
        response["sequences"][0],
        skip_special_tokens=True,
    )
    return decoded_output

prompt = """>>CONTEXT<<I heard John Marks was the first christian missionary in Ireland. What was the capital then??>>REWRITE<< """
response = response_generate(
    model,
    tokenizer,
    prompt,
    max_new_tokens=20,
    temperature=0.1,
)

def extract_between_tags(input_string):
    pattern = r'>>REWRITE<<(.*?)</REWRITE>'
    match = re.search(pattern, input_string)
    return match.group(1) if match else ''

print(extract_between_tags(response))
```
|
ostapeno/neo_trwevseq_simn1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrlib_embeddings_mllr-1
|
ostapeno
| 2024-01-03T15:55:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-03T15:55:30Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-03 15:55:30+00:00
|
ostapeno/neo_trwevseq_simn1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrnone_mllr0.1
|
ostapeno
| 2024-01-03T15:54:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-03T15:54:35Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-03 15:54:35+00:00
|
ostapeno/neo_trwevseq_simn1_sbs0.5_sgd_full_ft_poly_router_dir_finegrained_retrlib_embeddings_mllr0.1
|
ostapeno
| 2024-01-03T15:54:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-03T15:54:00Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-03 15:54:00+00:00
|
ostapeno/neo_trwevseq_simn1_sbs0.5_sgd_full_ft_poly_router_dir_coarsegrained_retrlib_embeddings_mllr0.1
|
ostapeno
| 2024-01-03T15:53:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-03T15:53:37Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-03 15:53:37+00:00
|
ostapeno/neo_trwevseq_simn1_sbs0.5_sgd_full_ft_poly_router_dir_coarsegrained_retrnone_mllr-1
|
ostapeno
| 2024-01-03T15:53:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-01-03T15:53:27Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| squad_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| wiqa_effect_with_string_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| quoref_Find_Answer_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| duorc_ParaphraseRC_title_generation_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| duorc_SelfRC_answer_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| yelp_polarity_reviews_0_2_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| cos_e_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
| quartz_read_passage_below_choose_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dream_baseline_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiki_hop_original_choose_best_object_interrogative_2_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
Last updated on: 2024-01-03 15:53:27+00:00
|
Mik99/mistral_7b_v02_dutch_data_test_02
|
Mik99
| 2024-01-03T15:44:45Z | 2 | 1 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | 2024-01-03T15:44:21Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
LoneStriker/Panda-7B-v0.1-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-03T15:37:32Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:NeuralNovel/Panda-v1",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-03T15:31:24Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- NeuralNovel/Panda-v1
library_name: transformers
inference: false
---

# NeuralNovel/Panda-7B-v0.1
The **Panda-7B-v0.1** model by NeuralNovel.
This fine-tune has been designed to provide detailed, creative, and logical responses in the context of diverse narratives. It is optimised for creative writing, roleplay, and logical problem solving.
Finetuned from Mistral-7B-Instruct-v0.2 under the Apache 2.0 license, it is suitable for commercial or non-commercial use.
### Data-set
The model was finetuned using the Panda-v1 dataset.
### Summary
Fine-tuned with the intention of generating instructive and narrative text, with a specific focus on versatility, character engagement, and nuanced writing capability.
#### Out-of-Scope Use
The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.
### Bias, Risks, and Limitations
The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.
Users are advised to exercise caution, as there might be some inherent genre or writing bias.
### Hardware and Training
Trained using NVIDIA Tesla T40 24 GB.
```
n_epochs = 3,
n_checkpoints = 3,
batch_size = 12,
learning_rate = 1e-5,
```
*Sincere appreciation to Techmind for their generous sponsorship.*
|
FlyingFishzzz/model_left_lmk
|
FlyingFishzzz
| 2024-01-03T15:30:42Z | 3 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-02T18:11:53Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-FlyingFishzzz/model_left_lmk
These are ControlNet weights trained on stabilityai/stable-diffusion-2-1-base with a new type of conditioning.
You can find some example images below.
prompt: A young man in the forest wearing sportswear is looking into the distance to the side

prompt: A girl wearing a dress in the auditorium, looking to the side

prompt: An older lady wearing a cotton coat sits in the garden and looks to the side

|
jan-hq/Pandora-v1-10.7B
|
jan-hq
| 2024-01-03T15:28:28Z | 13 | 7 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-14T05:53:37Z |
---
license: apache-2.0
language:
- en
tags:
- merge
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model uses the `passthrough` merge method to combine two of the best 7B models on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
1. [viethq188/LeoScorpius-7B-Chat-DPO](https://huggingface.co/viethq188/LeoScorpius-7B-Chat-DPO)
2. [GreenNode/GreenNodeLM-7B-v1olet](https://huggingface.co/GreenNode/GreenNodeLM-7B-v1olet)
The yaml config file for this model is here:
```yaml
slices:
  - sources:
      - model: "viethq188/LeoScorpius-7B-Chat-DPO"
        layer_range: [0, 24]
  - sources:
      - model: "GreenNode/GreenNodeLM-7B-v1olet"
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
# Prompt template
- **ChatML**
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
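A minimal `transformers` inference sketch, assuming the repo's tokenizer ships this ChatML template as its chat template (otherwise format the prompt manually as shown above):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "jan-hq/Pandora-v1-10.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```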
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- ๐ป **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- ๐๏ธ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- ๐ **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- ๐ **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Merger
This is a test project for merging models.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)
|
aboodalokla2/captcahtext
|
aboodalokla2
| 2024-01-03T15:27:27Z | 0 | 0 | null |
[
"image-to-text",
"region:us"
] |
image-to-text
| 2024-01-03T15:26:33Z |
---
pipeline_tag: image-to-text
---
|
jan-hq/trinity-v1.1
|
jan-hq
| 2024-01-03T15:26:39Z | 16 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T13:26:08Z |
---
license: apache-2.0
language:
- en
tags:
- merge
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model is a fine-tune of [trinity-v1](https://huggingface.co/jan-hq/trinity-v1) on [ultrafeedback_binarized_subset](jan-hq/ultrafeedback_binarized_subset) (a cleaned version).
More details about the training result [here](https://huggingface.co/jan-hq/trinity-v1-dpo-adapter).
# Prompt template
```
{system_message}
### Instruction:
{prompt}
### Response:
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- ๐ป **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- ๐๏ธ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- ๐ **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- ๐ **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [alignment-handbook](https://github.com/huggingface/alignment-handbook)
|
jan-hq/trinity-v1.2
|
jan-hq
| 2024-01-03T15:25:17Z | 15 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T01:29:34Z |
---
license: apache-2.0
language:
- en
tags:
- merge
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a> - <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This model is a fine-tune of [trinity-v1.1](https://huggingface.co/jan-hq/trinity-v1) on [ultrafeedback_binarized_subset](jan-hq/ultrafeedback_binarized_subset) (a cleaned version), adapting it to the ChatML prompt template.
More details about the training result [here](https://huggingface.co/jan-hq/trinity-v1.2-dpo-adapter).
# Prompt template
**ChatML**
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
```
{system_message}
### Instruction:
{prompt}
### Response:
```
# Run this model
You can run this model using [Jan Desktop](https://jan.ai/) on Mac, Windows, or Linux.
Jan is an open-source ChatGPT alternative that is:
- ๐ป **100% offline on your machine**: Your conversations remain confidential, and visible only to you.
- ๐๏ธ **An Open File Format**: Conversations and model settings stay on your computer and can be exported or deleted at any time.
- ๐ **OpenAI Compatible**: Local server on port `1337` with OpenAI compatible endpoints
- ๐ **Open Source & Free**: We build in public; check out our [Github](https://github.com/janhq)

# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | ?|
| ARC (25-shot) | ? |
| HellaSwag (10-shot) | ? |
| MMLU (5-shot) | ?|
| TruthfulQA (0-shot) | ? |
| Winogrande (5-shot) | ? |
| GSM8K (5-shot) | ? |
# Acknowledgement
- [alignment-handbook](https://github.com/huggingface/alignment-handbook)
|
dearxoasis/whisper-small-fm
|
dearxoasis
| 2024-01-03T15:18:52Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"th",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-29T18:01:38Z |
---
language:
- th
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: whisper-small-fm
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 16.0
type: mozilla-foundation/common_voice_16_0
config: th
split: test
args: 'config: th, split: test'
metrics:
- name: Wer
type: wer
value: 241.3265306122449
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-fm
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7602
- Wer: 241.3265
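For quick experimentation, a minimal transcription sketch (`sample.wav` is a placeholder audio file; note the very high WER reported above):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="dearxoasis/whisper-small-fm")
print(asr("sample.wav")["text"])
```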
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0002 | 40.0 | 1000 | 0.6741 | 498.4694 |
| 0.0001 | 80.0 | 2000 | 0.7207 | 271.4286 |
| 0.0 | 120.0 | 3000 | 0.7514 | 218.3673 |
| 0.0 | 160.0 | 4000 | 0.7602 | 241.3265 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
EMBO/SourceData_GP-CHEM-ROLES_v_1-0-0_BioLinkBERT_large
|
EMBO
| 2024-01-03T15:18:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:source_data",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T15:05:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GP-CHEM-ROLES_v_1-0-0_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_MULTI
metrics:
- name: Precision
type: precision
value: 0.9572859572859573
- name: Recall
type: recall
value: 0.9649457039436083
- name: F1
type: f1
value: 0.9611005692599621
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GP-CHEM-ROLES_v_1-0-0_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0100
- Accuracy Score: 0.9975
- Precision: 0.9573
- Recall: 0.9649
- F1: 0.9611
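For quick experimentation, a minimal token-classification sketch (the example sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="EMBO/SourceData_GP-CHEM-ROLES_v_1-0-0_BioLinkBERT_large",
    aggregation_strategy="simple",
)
print(ner("Cells were treated with rapamycin before TP53 levels were measured."))
```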
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0068 | 1.0 | 863 | 0.0100 | 0.9975 | 0.9573 | 0.9649 | 0.9611 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
revellsi/reachy-pollen
|
revellsi
| 2024-01-03T15:18:08Z | 17 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-03T15:18:03Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A <s0><s1> character a robot with a camera and a microphone
output:
url: image-0.png
- text: A <s0><s1> character a robot with a striped shirt and a black background
output:
url: image-1.png
- text: A <s0><s1> character a man is using a laptop to play a game with a robot
output:
url: image-2.png
- text: A <s0><s1> character a robot standing on a stand with a striped shirt
output:
url: image-3.png
- text: A <s0><s1> character a robot with a striped shirt and a black and white striped
tie
output:
url: image-4.png
- text: A <s0><s1> character a robot with a striped shirt and a black background
output:
url: image-5.png
- text: A <s0><s1> character a robot with a striped shirt on a stand
output:
url: image-6.png
- text: A <s0><s1> character a robot with a striped shirt on a stand
output:
url: image-7.png
- text: A <s0><s1> character a robot with a striped shirt and a hand up
output:
url: image-8.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A <s0><s1> character
license: openrail++
---
# SDXL LoRA DreamBooth - revellsi/reachy-pollen
<Gallery />
## Model description
### These are revellsi/reachy-pollen LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`reachy-pollen.safetensors` here ๐พ](/revellsi/reachy-pollen/blob/main/reachy-pollen.safetensors)**.
- Place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:reachy-pollen:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`reachy-pollen_emb.safetensors` here ๐พ](/revellsi/reachy-pollen/blob/main/reachy-pollen_emb.safetensors)**.
- Place it in your `embeddings` folder
- Use it by adding `reachy-pollen_emb` to your prompt. For example, `A reachy-pollen_emb character`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('revellsi/reachy-pollen', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='revellsi/reachy-pollen', filename='reachy-pollen_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A <s0><s1> character').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/revellsi/reachy-pollen/tree/main).
The weights were trained using [๐งจ diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
VijayaKrishnaRamesh/ppo-Huggy
|
VijayaKrishnaRamesh
| 2024-01-03T15:02:55Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-01-03T15:02:50Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog ๐ถ to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: VijayaKrishnaRamesh/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play ๐
|
tiagoblima/t5_large-qg-af
|
tiagoblima
| 2024-01-03T15:00:34Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"dataset:tiagoblima/qg_squad_v1_pt",
"base_model:unicamp-dl/ptt5-large-t5-vocab",
"base_model:finetune:unicamp-dl/ptt5-large-t5-vocab",
"license:mit",
"region:us"
] | null | 2023-12-31T14:56:43Z |
---
license: mit
base_model: unicamp-dl/ptt5-large-t5-vocab
tags:
- generated_from_trainer
datasets:
- tiagoblima/qg_squad_v1_pt
model-index:
- name: t5_large-qg-af
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large-qg-af
This model is a fine-tuned version of [unicamp-dl/ptt5-large-t5-vocab](https://huggingface.co/unicamp-dl/ptt5-large-t5-vocab) on the tiagoblima/qg_squad_v1_pt dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5058
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.005
- train_batch_size: 64
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.2352 | 1.0 | 808 | 7.3750 |
| 5.3111 | 2.0 | 1616 | 6.3174 |
| 4.8485 | 3.0 | 2424 | 5.8192 |
| 4.616 | 4.0 | 3232 | 5.5792 |
| 4.5649 | 5.0 | 4040 | 5.5058 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.0
- Datasets 2.15.0
- Tokenizers 0.15.0
|
EMBO/SourceData_GENEPROD-ROLES_v_1-0-0_BioLinkBERT_large
|
EMBO
| 2024-01-03T14:56:28Z | 169 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:source_data",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T14:23:27Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GENEPROD-ROLES_v_1-0-0_BioLinkBERT_large
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_GP
metrics:
- name: Precision
type: precision
value: 0.9172342035565645
- name: Recall
type: recall
value: 0.9250655854996422
- name: F1
type: f1
value: 0.9211332494241136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GENEPROD-ROLES_v_1-0-0_BioLinkBERT_large
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0137
- Accuracy Score: 0.9948
- Precision: 0.9172
- Recall: 0.9251
- F1: 0.9211
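A minimal usage sketch (the example sentence and the aggregation strategy are illustrative assumptions, not from this card):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="EMBO/SourceData_GENEPROD-ROLES_v_1-0-0_BioLinkBERT_large",
    aggregation_strategy="simple",  # assumption: merge subword tokens into spans
)
print(tagger("Phosphorylation of AKT1 was reduced upon PTEN overexpression."))
```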
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0153 | 1.0 | 863 | 0.0137 | 0.9948 | 0.9172 | 0.9251 | 0.9211 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
Dorjzodovsuren/mongolian-gpt2
|
Dorjzodovsuren
| 2024-01-03T14:44:59Z | 19 | 1 |
transformers
|
[
"transformers",
"jax",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"politics",
"mn",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-02T02:57:23Z |
---
license: mit
language:
- mn
library_name: transformers
tags:
- politics
widget:
- text: "ะะพะฝะณะพะป ัะปััะฝ ะตัำฉะฝั
ะธะนะปำฉะณั"
example_title: "Mongolian president"
- text: "ะฅะฐะนั ะณัะถ ัั ะฒั"
example_title: "What is love "
- text: "ะฆัะนะฒะฐะฝ "
example_title: "Tsuiwan"
---
|
Dangurangu/marian-finetuned-kde4-en-to-fr
|
Dangurangu
| 2024-01-03T14:36:19Z | 13 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-01-03T12:26:41Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.837727401681214
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8556
- Bleu: 52.8377
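A minimal usage sketch (the input string is illustrative):
```python
from transformers import pipeline

translator = pipeline(
    "translation_en_to_fr",
    model="Dangurangu/marian-finetuned-kde4-en-to-fr",
)
print(translator("Default to expanded threads")[0]["translation_text"])
```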
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Anshler/clip-prefix
|
Anshler
| 2024-01-03T14:35:46Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-12-27T03:54:55Z |
---
license: mit
language:
- en
metrics:
- bleu
- meteor
- rouge
library_name: transformers
---
|
alirzb/S1_M1_R1_beit_42534242
|
alirzb
| 2024-01-03T14:33:52Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/beit-base-patch16-224",
"base_model:finetune:microsoft/beit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-03T13:34:43Z |
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: S1_M1_R1_beit_42534242
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9980483044645035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R1_beit_42534242
This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0090
- Accuracy: 0.9980
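A minimal usage sketch (the image path is a placeholder; the card does not document the class labels):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="alirzb/S1_M1_R1_beit_42534242",
)
print(classifier("example.jpg"))  # placeholder path to a local image
```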
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0101 | 1.0 | 256 | 0.0465 | 0.9873 |
| 0.0107 | 2.0 | 512 | 0.0155 | 0.9939 |
| 0.0011 | 3.0 | 768 | 0.0082 | 0.9976 |
| 0.0095 | 4.0 | 1025 | 0.0077 | 0.9978 |
| 0.0002 | 5.0 | 1280 | 0.0090 | 0.9980 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Darshan2412/llama2-qlora-finetunined-french
|
Darshan2412
| 2024-01-03T14:33:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2024-01-03T14:32:56Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
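In the absence of documented usage, a minimal loading sketch, assuming this repository holds only the QLoRA adapter for the base model named in the metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "Darshan2412/llama2-qlora-finetunined-french"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter
```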
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
LarryAIDraw/SkayaV1
|
LarryAIDraw
| 2024-01-03T14:26:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-03T14:22:39Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/251267/skaya-killiland-or-manhwa-or-return-of-the-frozen-player
|
LarryAIDraw/reika_kitakami_v2
|
LarryAIDraw
| 2024-01-03T14:26:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-03T14:21:50Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/225596/reika-kitakami-or-the-idolmster-million-live-idolmaster
|
LarryAIDraw/EmiNuV4-09
|
LarryAIDraw
| 2024-01-03T14:25:51Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-01-03T14:21:29Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/252298/nu-kage-no-jitsuryokusha-ni-naritakute
|
mbruton/gal_mBERT
|
mbruton
| 2024-01-03T14:21:14Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"gl",
"dataset:mbruton/galician_srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-15T11:09:41Z |
---
license: apache-2.0
datasets:
- mbruton/galician_srl
language:
- gl
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for GalBERT for Semantic Role Labeling (cased)
This model is fine-tuned on [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
## Model Details
### Model Description
GalBERT for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for low-resource Galician. This model is cased: it makes a difference between english and English. It was fine-tuned on Galician with the following objectives:
- Identify up to 13 verbal roots within a sentence.
- Identify available arguments for each verbal root. Due to scarcity of data, this model focused solely on the identification of arguments 0, 1, and 2.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Galician (gl)
- **License:** Apache 2.0
- **Finetuned from model:** [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Galician.
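For example, a minimal sketch of running the tagger with standard `transformers` calls (the Galician sentence is illustrative, not taken from the dataset):
```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "mbruton/gal_mBERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

sentence = "O neno comeu a mazá."  # illustrative Galician sentence
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
label_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, label_ids):
    print(token, model.config.id2label[int(label_id)])
```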
## Bias, Risks, and Limitations
Galician is a low-resource language which prior to this project lacked a semantic role labeling dataset. As such, the dataset used to train this model is extremely limited and could benefit from the inclusion of additional sentences and manual validation by native speakers.
## Training Details
### Training Data
This model was trained on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
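These settings can be sketched with `transformers.TrainingArguments` as below (an illustrative mapping, not the project's actual training script; the output path and evaluation strategy are assumptions):
```python
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="gal_mBERT-srl",          # illustrative output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    weight_decay=0.01,
    evaluation_strategy="epoch",         # assumption: evaluate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,         # required for early stopping
    metric_for_best_model="eval_loss",
)
# Passed to Trainer(..., callbacks=[...]); patience of 10 epochs per the card.
early_stopping = EarlyStoppingCallback(early_stopping_patience=10)
```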
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
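For reference, a minimal sketch of computing these scores with the `evaluate` wrapper around seqeval; the IOB-style labels below are illustrative, and this card's `r#:tag` scheme would need to be mapped into that form first:
```python
import evaluate

seqeval = evaluate.load("seqeval")
references = [["O", "B-arg0", "I-arg0", "B-root", "O"]]
predictions = [["O", "B-arg0", "O", "B-root", "O"]]
print(seqeval.compute(predictions=predictions, references=references))
```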
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0 | 0.72 | 0.77 | 0.74 | 485 |
| 0:arg1 | 0.74 | 0.74 | 0.74 | 483 |
| 0:arg2 | 0.66 | 0.76 | 0.71 | 264 |
| 0:root | 0.92 | 0.91 | 0.92 | 948 |
| 1:arg0 | 0.68 | 0.62 | 0.65 | 348 |
| 1:arg1 | 0.69 | 0.63 | 0.66 | 443 |
| 1:arg2 | 0.65 | 0.55 | 0.59 | 211 |
| 1:root | 0.85 | 0.83 | 0.84 | 802 |
| 2:arg0 | 0.59 | 0.56 | 0.57 | 240 |
| 2:arg1 | 0.61 | 0.58 | 0.59 | 331 |
| 2:arg2 | 0.56 | 0.55 | 0.56 | 156 |
| 2:root | 0.79 | 0.70 | 0.74 | 579 |
| 3:arg0 | 0.42 | 0.45 | 0.44 | 137 |
| 3:arg1 | 0.54 | 0.55 | 0.55 | 216 |
| 3:arg2 | 0.48 | 0.52 | 0.50 | 110 |
| 3:root | 0.63 | 0.71 | 0.67 | 374 |
| 4:arg0 | 0.42 | 0.40 | 0.41 | 70 |
| 4:arg1 | 0.50 | 0.52 | 0.51 | 109 |
| 4:arg2 | 0.46 | 0.50 | 0.48 | 66 |
| 4:root | 0.50 | 0.72 | 0.59 | 206 |
| 5:arg0 | 0.27 | 0.20 | 0.23 | 20 |
| 5:arg1 | 0.35 | 0.51 | 0.41 | 57 |
| 5:arg2 | 0.27 | 0.14 | 0.19 | 28 |
| 5:root | 0.42 | 0.28 | 0.34 | 102 |
| 6:arg0 | 0.50 | 0.08 | 0.13 | 13 |
| 6:arg1 | 0.20 | 0.04 | 0.07 | 25 |
| 6:arg2 | 0.00 | 0.00 | 0.00 | 8 |
| 6:root | 0.25 | 0.21 | 0.23 | 42 |
| 7:arg0 | 0.00 | 0.00 | 0.00 | 3 |
| 7:arg1 | 0.00 | 0.00 | 0.00 | 8 |
| 7:arg2 | 0.00 | 0.00 | 0.00 | 5 |
| 7:root | 0.00 | 0.00 | 0.00 | 16 |
| 8:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.00 | 0.00 | 0.00 | 7 |
| 9:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 9:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1 | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.69 | 0.68 | 0.69 | 6926 |
| macro avg | 0.35 | 0.33 | 0.33 | 6926 |
| weighted avg | 0.69 | 0.68 | 0.68 | 6926 |
| tot root avg | 0.40 | 0.40 | 0.39 | 3081 |
| tot A0 avg | 0.36 | 0.31 | 0.32 | 1318 |
| tot A1 avg | 0.33 | 0.32 | 0.32 | 1677 |
| tot A2 avg | 0.31 | 0.30 | 0.30 | 850 |
| tot r0 avg | 0.76 | 0.80 | 0.78 | 2180 |
| tot r1 avg | 0.72 | 0.66 | 0.69 | 1804 |
| tot r2 avg | 0.64 | 0.60 | 0.62 | 1306 |
| tot r3 avg | 0.52 | 0.56 | 0.54 | 837 |
| tot r4 avg | 0.47 | 0.54 | 0.50 | 451 |
| tot r5 avg | 0.33 | 0.28 | 0.29 | 207 |
| tot r6 avg | 0.24 | 0.08 | 0.11 | 88 |
| tot r7 avg | 0.00 | 0.00 | 0.00 | 32 |
| tot r8 avg | 0.00 | 0.00 | 0.00 | 11 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 7 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 3 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/gal_enpt_mBERT
|
mbruton
| 2024-01-03T14:19:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"gl",
"en",
"pt",
"dataset:mbruton/galician_srl",
"dataset:CoNLL-2012",
"dataset:PropBank.Br",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-15T11:11:42Z |
---
license: apache-2.0
datasets:
- mbruton/galician_srl
- CoNLL-2012
- PropBank.Br
language:
- gl
- en
- pt
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for GalBERT-enpt for Semantic Role Labeling (cased)
This model is fine-tuned on a version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) which is pre-trained on the SRL task for English and Portuguese, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
## Model Details
### Model Description
GalBERT-enpt for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for low-resource Galician. This model is additionally pre-trained on the SRL task for English and Portuguese. This model is cased: it makes a difference between english and English. It was fine-tuned on Galician with the following objectives:
- Identify up to 13 verbal roots within a sentence.
- Identify available arguments for each verbal root. Due to scarcity of data, this model focused solely on the identification of arguments 0, 1, and 2.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Galician (gl), English (en), Portuguese (pt)
- **License:** Apache 2.0
- **Finetuned from model:** [English & Portuguese pre-trained multilingual BERT](https://huggingface.co/liaad/srl-enpt_mbert-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Galician.
## Bias, Risks, and Limitations
Galician is a low-resource language which prior to this project lacked a semantic role labeling dataset. As such, the dataset used to train this model is extremely limited and could benefit from the inclusion of additional sentences and manual validation by native speakers.
## Training Details
### Training Data
This model was pre-trained on both the [OntoNotes 5.0 English SRL corpus](http://catalog.ldc.upenn.edu/LDC2013T19) and the [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl).
This model was fine-tuned on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0 | 0.75 | 0.71 | 0.73 | 485 |
| 0:arg1 | 0.68 | 0.72 | 0.70 | 483 |
| 0:arg2 | 0.71 | 0.73 | 0.72 | 264 |
| 0:root | 0.93 | 0.93 | 0.93 | 948 |
| 1:arg0 | 0.66 | 0.62 | 0.64 | 348 |
| 1:arg1 | 0.70 | 0.67 | 0.69 | 443 |
| 1:arg2 | 0.66 | 0.58 | 0.62 | 211 |
| 1:root | 0.87 | 0.84 | 0.86 | 802 |
| 2:arg0 | 0.61 | 0.52 | 0.56 | 240 |
| 2:arg1 | 0.62 | 0.61 | 0.61 | 331 |
| 2:arg2 | 0.57 | 0.51 | 0.54 | 156 |
| 2:root | 0.77 | 0.79 | 0.78 | 579 |
| 3:arg0 | 0.45 | 0.45 | 0.45 | 137 |
| 3:arg1 | 0.52 | 0.52 | 0.52 | 216 |
| 3:arg2 | 0.52 | 0.45 | 0.48 | 110 |
| 3:root | 0.71 | 0.70 | 0.70 | 374 |
| 4:arg0 | 0.48 | 0.46 | 0.47 | 70 |
| 4:arg1 | 0.46 | 0.46 | 0.46 | 109 |
| 4:arg2 | 0.44 | 0.56 | 0.49 | 66 |
| 4:root | 0.61 | 0.66 | 0.63 | 206 |
| 5:arg0 | 0.23 | 0.35 | 0.28 | 20 |
| 5:arg1 | 0.35 | 0.60 | 0.44 | 57 |
| 5:arg2 | 0.38 | 0.21 | 0.27 | 28 |
| 5:root | 0.55 | 0.52 | 0.53 | 102 |
| 6:arg0 | 0.33 | 0.08 | 0.12 | 13 |
| 6:arg1 | 0.25 | 0.08 | 0.12 | 25 |
| 6:arg2 | 0.00 | 0.00 | 0.00 | 8 |
| 6:root | 0.32 | 0.38 | 0.35 | 42 |
| 7:arg0 | 0.00 | 0.00 | 0.00 | 3 |
| 7:arg1 | 0.00 | 0.00 | 0.00 | 8 |
| 7:arg2 | 0.00 | 0.00 | 0.00 | 5 |
| 7:root | 0.00 | 0.00 | 0.00 | 16 |
| 8:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.00 | 0.00 | 0.00 | 7 |
| 9:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 9:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1 | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.71 | 0.69 | 0.70 | 6926 |
| macro avg | 0.36 | 0.35 | 0.35 | 6926 |
| weighted avg | 0.70 | 0.69 | 0.70 | 6926 |
| tot root avg | 0.43 | 0.44 | 0.43 | 3081 |
| tot A0 avg | 0.35 | 0.32 | 0.33 | 1318 |
| tot A1 avg | 0.33 | 0.33 | 0.32 | 1677 |
| tot A2 avg | 0.33 | 0.30 | 0.31 | 850 |
| tot r0 avg | 0.77 | 0.77 | 0.77 | 2180 |
| tot r1 avg | 0.72 | 0.68 | 0.70 | 1804 |
| tot r2 avg | 0.64 | 0.61 | 0.62 | 1306 |
| tot r3 avg | 0.55 | 0.53 | 0.54 | 837 |
| tot r4 avg | 0.50 | 0.54 | 0.51 | 451 |
| tot r5 avg | 0.38 | 0.42 | 0.38 | 207 |
| tot r6 avg | 0.23 | 0.14 | 0.15 | 88 |
| tot r7 avg | 0.00 | 0.00 | 0.00 | 32 |
| tot r8 avg | 0.00 | 0.00 | 0.00 | 11 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 7 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 3 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/gal_XLM-R
|
mbruton
| 2024-01-03T14:19:11Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"gl",
"dataset:mbruton/galician_srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-15T12:42:54Z |
---
license: apache-2.0
datasets:
- mbruton/galician_srl
language:
- gl
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for GalXLM-R for Semantic Role Labeling
This model is fine-tuned on [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base) and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
## Model Details
### Model Description
GalXLM-R for Semantic Role Labeling (SRL) is a transformers model, leveraging XLM-R's extensive pretraining on 100 languages to achieve better SRL predictions for low-resource Galician. It was fine-tuned on Galician with the following objectives:
- Identify up to 13 verbal roots within a sentence.
- Identify available arguments for each verbal root. Due to scarcity of data, this model focused solely on the identification of arguments 0, 1, and 2.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Galician (gl)
- **License:** Apache 2.0
- **Finetuned from model:** [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Galician.
## Bias, Risks, and Limitations
Galician is a low-resource language which prior to this project lacked a semantic role labeling dataset. As such, the dataset used to train this model is extremely limited and could benefit from the inclusion of additional sentences and manual validation by native speakers.
## Training Details
### Training Data
This model was fine-tuned on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0 | 0.77 | 0.77 | 0.77 | 485 |
| 0:arg1 | 0.79 | 0.71 | 0.75 | 483 |
| 0:arg2 | 0.72 | 0.72 | 0.72 | 264 |
| 0:root | 0.94 | 0.94 | 0.94 | 948 |
| 1:arg0 | 0.62 | 0.67 | 0.64 | 348 |
| 1:arg1 | 0.69 | 0.68 | 0.69 | 443 |
| 1:arg2 | 0.65 | 0.68 | 0.67 | 211 |
| 1:root | 0.85 | 0.88 | 0.86 | 802 |
| 2:arg0 | 0.58 | 0.57 | 0.58 | 240 |
| 2:arg1 | 0.61 | 0.60 | 0.61 | 331 |
| 2:arg2 | 0.52 | 0.65 | 0.58 | 156 |
| 2:root | 0.77 | 0.77 | 0.77 | 579 |
| 3:arg0 | 0.46 | 0.42 | 0.44 | 137 |
| 3:arg1 | 0.53 | 0.56 | 0.55 | 216 |
| 3:arg2 | 0.45 | 0.53 | 0.49 | 110 |
| 3:root | 0.63 | 0.74 | 0.68 | 374 |
| 4:arg0 | 0.40 | 0.27 | 0.32 | 70 |
| 4:arg1 | 0.53 | 0.44 | 0.48 | 109 |
| 4:arg2 | 0.43 | 0.56 | 0.49 | 66 |
| 4:root | 0.53 | 0.59 | 0.56 | 206 |
| 5:arg0 | 0.33 | 0.10 | 0.15 | 20 |
| 5:arg1 | 0.39 | 0.51 | 0.44 | 57 |
| 5:arg2 | 0.30 | 0.11 | 0.16 | 28 |
| 5:root | 0.40 | 0.38 | 0.39 | 102 |
| 6:arg0 | 0.25 | 0.08 | 0.12 | 13 |
| 6:arg1 | 0.00 | 0.00 | 0.00 | 25 |
| 6:arg2 | 0.00 | 0.00 | 0.00 | 8 |
| 6:root | 0.10 | 0.05 | 0.06 | 42 |
| 7:arg0 | 0.00 | 0.00 | 0.00 | 3 |
| 7:arg1 | 0.00 | 0.00 | 0.00 | 8 |
| 7:arg2 | 0.00 | 0.00 | 0.00 | 5 |
| 7:root | 0.00 | 0.00 | 0.00 | 16 |
| 8:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.00 | 0.00 | 0.00 | 7 |
| 9:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 9:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1 | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.71 | 0.70 | 0.70 | 6926 |
| macro avg | 0.34 | 0.33 | 0.33 | 6926 |
| weighted avg | 0.70 | 0.70 | 0.70 | 6926 |
| tot root avg | 0.38 | 0.40 | 0.39 | 3081 |
| tot A0 avg | 0.34 | 0.29 | 0.30 | 1318 |
| tot A1 avg | 0.32 | 0.32 | 0.32 | 1677 |
| tot A2 avg | 0.31 | 0.33 | 0.31 | 850 |
| tot r0 avg | 0.81 | 0.79 | 0.80 | 2180 |
| tot r1 avg | 0.70 | 0.73 | 0.72 | 1804 |
| tot r2 avg | 0.62 | 0.65 | 0.64 | 1306 |
| tot r3 avg | 0.52 | 0.56 | 0.54 | 837 |
| tot r4 avg | 0.47 | 0.47 | 0.46 | 451 |
| tot r5 avg | 0.36 | 0.28 | 0.29 | 207 |
| tot r6 avg | 0.09 | 0.03 | 0.05 | 88 |
| tot r7 avg | 0.00 | 0.00 | 0.00 | 32 |
| tot r8 avg | 0.00 | 0.00 | 0.00 | 11 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 7 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 3 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/gal_pt_XLM-R
|
mbruton
| 2024-01-03T14:18:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"gl",
"pt",
"dataset:mbruton/galician_srl",
"dataset:PropBank.Br",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-15T12:43:59Z |
---
license: apache-2.0
datasets:
- mbruton/galician_srl
- PropBank.Br
language:
- gl
- pt
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for GalXLM-R-pt for Semantic Role Labeling
This model is fine-tuned on a version of [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base) which is pre-trained on the SRL task for Portuguese, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
## Model Details
### Model Description
GalXLM-R-pt for Semantic Role Labeling (SRL) is a transformers model, leveraging XLM-R's extensive pretraining on 100 languages to achieve better SRL predictions for low-resource Galician. This model is additionally pre-trained on the SRL task for Portuguese. It was fine-tuned on Galician with the following objectives:
- Identify up to 13 verbal roots within a sentence.
- Identify available arguments for each verbal root. Due to scarcity of data, this model focused solely on the identification of arguments 0, 1, and 2.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Galician (gl), Portuguese (pt)
- **License:** Apache 2.0
- **Finetuned from model:** [Portuguese pre-trained XLM RoBERTa Base](https://huggingface.co/liaad/srl-pt_xlmr-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Galician.
## Bias, Risks, and Limitations
Galician is a low-resource language which prior to this project lacked a semantic role labeling dataset. As such, the dataset used to train this model is extremely limited and could benefit from the inclusion of additional sentences and manual validation by native speakers.
## Training Details
### Training Data
This model was pre-trained on the [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl).
This model was fine-tuned on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0 | 0.74 | 0.81 | 0.77 | 485 |
| 0:arg1 | 0.72 | 0.74 | 0.73 | 483 |
| 0:arg2 | 0.69 | 0.74 | 0.71 | 264 |
| 0:root | 0.93 | 0.93 | 0.93 | 948 |
| 1:arg0 | 0.68 | 0.66 | 0.67 | 348 |
| 1:arg1 | 0.72 | 0.67 | 0.69 | 443 |
| 1:arg2 | 0.59 | 0.60 | 0.59 | 211 |
| 1:root | 0.87 | 0.85 | 0.86 | 802 |
| 2:arg0 | 0.54 | 0.56 | 0.55 | 240 |
| 2:arg1 | 0.62 | 0.60 | 0.61 | 331 |
| 2:arg2 | 0.55 | 0.65 | 0.59 | 156 |
| 2:root | 0.77 | 0.76 | 0.77 | 579 |
| 3:arg0 | 0.42 | 0.41 | 0.41 | 137 |
| 3:arg1 | 0.57 | 0.54 | 0.56 | 216 |
| 3:arg2 | 0.44 | 0.49 | 0.46 | 110 |
| 3:root | 0.64 | 0.74 | 0.69 | 374 |
| 4:arg0 | 0.49 | 0.41 | 0.45 | 70 |
| 4:arg1 | 0.53 | 0.47 | 0.50 | 109 |
| 4:arg2 | 0.42 | 0.50 | 0.46 | 66 |
| 4:root | 0.60 | 0.62 | 0.61 | 206 |
| 5:arg0 | 0.34 | 0.50 | 0.41 | 20 |
| 5:arg1 | 0.41 | 0.53 | 0.46 | 57 |
| 5:arg2 | 0.00 | 0.00 | 0.00 | 28 |
| 5:root | 0.56 | 0.48 | 0.52 | 102 |
| 6:arg0 | 0.00 | 0.00 | 0.00 | 13 |
| 6:arg1 | 0.00 | 0.00 | 0.00 | 25 |
| 6:arg2 | 0.00 | 0.00 | 0.00 | 8 |
| 6:root | 0.33 | 0.36 | 0.34 | 42 |
| 7:arg0 | 0.00 | 0.00 | 0.00 | 3 |
| 7:arg1 | 0.00 | 0.00 | 0.00 | 8 |
| 7:arg2 | 0.00 | 0.00 | 0.00 | 5 |
| 7:root | 0.00 | 0.00 | 0.00 | 16 |
| 8:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.00 | 0.00 | 0.00 | 7 |
| 9:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 9:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1 | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.71 | 0.70 | 0.70 | 6926 |
| macro avg | 0.34 | 0.35 | 0.34 | 6926 |
| weighted avg | 0.70 | 0.70 | 0.70 | 6926 |
| tot root avg | 0.43 | 0.43 | 0.43 | 3081 |
| tot A0 avg | 0.32 | 0.34 | 0.33 | 1318 |
| tot A1 avg | 0.32 | 0.32 | 0.32 | 1677 |
| tot A2 avg | 0.27 | 0.30 | 0.28 | 850 |
| tot r0 avg | 0.77 | 0.81 | 0.79 | 2180 |
| tot r1 avg | 0.72 | 0.70 | 0.70 | 1804 |
| tot r2 avg | 0.62 | 0.64 | 0.63 | 1306 |
| tot r3 avg | 0.52 | 0.55 | 0.53 | 837 |
| tot r4 avg | 0.51 | 0.50 | 0.51 | 451 |
| tot r5 avg | 0.33 | 0.38 | 0.35 | 207 |
| tot r6 avg | 0.08 | 0.09 | 0.09 | 88 |
| tot r7 avg | 0.00 | 0.00 | 0.00 | 32 |
| tot r8 avg | 0.00 | 0.00 | 0.00 | 11 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 7 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 3 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
rogerpolo/distilbert-base-uncased-finetuned-emotion
|
rogerpolo
| 2024-01-03T14:17:23Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-03T14:06:21Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.926431311696564
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2178
- Accuracy: 0.9265
- F1: 0.9264
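A minimal usage sketch (the input sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="rogerpolo/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy with these results!"))
```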
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8383 | 1.0 | 250 | 0.3088 | 0.912 | 0.9111 |
| 0.2476 | 2.0 | 500 | 0.2178 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
mbruton/gal_ptsp_mBERT
|
mbruton
| 2024-01-03T14:16:09Z | 90 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"gl",
"pt",
"es",
"dataset:mbruton/galician_srl",
"dataset:PropBank.Br",
"dataset:mbruton/spanish_srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-15T16:25:52Z |
---
license: apache-2.0
datasets:
- mbruton/galician_srl
- PropBank.Br
- mbruton/spanish_srl
language:
- gl
- pt
- es
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for GalBERT-ptsp for Semantic Role Labeling (cased)
This model is fine-tuned on a version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) which is pre-trained on the SRL task for Portuguese and Spanish, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
## Model Details
### Model Description
GalBERT-ptsp for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for low-resource Galician. This model is additionally pre-trained on the SRL task for Portuguese and Spanish. This model is cased: it makes a difference between english and English. It was fine-tuned on Galician with the following objectives:
- Identify up to 13 verbal roots within a sentence.
- Identify available arguments for each verbal root. Due to scarcity of data, this model focused solely on the identification of arguments 0, 1, and 2.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Galician (gl), Portuguese (pt), Spanish (es)
- **License:** Apache 2.0
- **Finetuned from model:** [Portuguese & Spanish pre-trained multilingual BERT](https://huggingface.co/mbruton/spa_pt_mBERT)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Galician.
## Bias, Risks, and Limitations
Galician is a low-resource language which prior to this project lacked a semantic role labeling dataset. As such, the dataset used to train this model is extremely limited and could benefit from the inclusion of additional sentences and manual validation by native speakers.
## Training Details
### Training Data
This model was pre-trained on both the [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl) and the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
This model was fine-tuned on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0 | 0.83 | 0.62 | 0.71 | 485 |
| 0:arg1 | 0.63 | 0.75 | 0.68 | 483 |
| 0:arg2 | 0.70 | 0.70 | 0.70 | 264 |
| 0:root | 0.93 | 0.92 | 0.92 | 948 |
| 1:arg0 | 0.63 | 0.63 | 0.63 | 348 |
| 1:arg1 | 0.67 | 0.63 | 0.65 | 443 |
| 1:arg2 | 0.60 | 0.62 | 0.61 | 211 |
| 1:root | 0.85 | 0.83 | 0.84 | 802 |
| 2:arg0 | 0.61 | 0.53 | 0.57 | 240 |
| 2:arg1 | 0.62 | 0.60 | 0.61 | 331 |
| 2:arg2 | 0.61 | 0.53 | 0.57 | 156 |
| 2:root | 0.76 | 0.77 | 0.77 | 579 |
| 3:arg0 | 0.55 | 0.46 | 0.50 | 137 |
| 3:arg1 | 0.57 | 0.54 | 0.56 | 216 |
| 3:arg2 | 0.44 | 0.66 | 0.53 | 110 |
| 3:root | 0.67 | 0.69 | 0.68 | 374 |
| 4:arg0 | 0.48 | 0.41 | 0.44 | 70 |
| 4:arg1 | 0.48 | 0.57 | 0.52 | 109 |
| 4:arg2 | 0.63 | 0.26 | 0.37 | 66 |
| 4:root | 0.58 | 0.67 | 0.62 | 206 |
| 5:arg0 | 0.50 | 0.45 | 0.47 | 20 |
| 5:arg1 | 0.49 | 0.49 | 0.49 | 57 |
| 5:arg2 | 0.50 | 0.18 | 0.26 | 28 |
| 5:root | 0.56 | 0.52 | 0.54 | 102 |
| 6:arg0 | 0.46 | 0.46 | 0.46 | 13 |
| 6:arg1 | 0.27 | 0.16 | 0.20 | 25 |
| 6:arg2 | 0.00 | 0.00 | 0.00 | 8 |
| 6:root | 0.36 | 0.40 | 0.38 | 42 |
| 7:arg0 | 0.00 | 0.00 | 0.00 | 3 |
| 7:arg1 | 0.00 | 0.00 | 0.00 | 8 |
| 7:arg2 | 0.00 | 0.00 | 0.00 | 5 |
| 7:root | 0.00 | 0.00 | 0.00 | 16 |
| 8:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.25 | 0.29 | 0.27 | 7 |
| 9:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 9:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1 | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.70 | 0.68 | 0.69 | 6926 |
| macro avg | 0.39 | 0.36 | 0.37 | 6926 |
| weighted avg | 0.70 | 0.68 | 0.69 | 6926 |
| tot root avg | 0.45 | 0.46 | 0.46 | 3081 |
| tot A0 avg | 0.41 | 0.36 | 0.38 | 1318 |
| tot A1 avg | 0.34 | 0.34 | 0.34 | 1677 |
| tot A2 avg | 0.35 | 0.30 | 0.30 | 850 |
| tot r0 avg | 0.77 | 0.75 | 0.75 | 2180 |
| tot r1 avg | 0.69 | 0.68 | 0.68 | 1804 |
| tot r2 avg | 0.65 | 0.61 | 0.63 | 1306 |
| tot r3 avg | 0.56 | 0.59 | 0.57 | 837 |
| tot r4 avg | 0.54 | 0.48 | 0.49 | 451 |
| tot r5 avg | 0.51 | 0.41 | 0.44 | 207 |
| tot r6 avg | 0.27 | 0.26 | 0.26 | 88 |
| tot r7 avg | 0.00 | 0.00 | 0.00 | 32 |
| tot r8 avg | 0.06 | 0.07 | 0.07 | 11 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 7 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 3 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/gal_enptsp_XLM-R
|
mbruton
| 2024-01-03T14:14:32Z | 87 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"gl",
"en",
"pt",
"es",
"dataset:mbruton/galician_srl",
"dataset:CoNLL-2012",
"dataset:PropBank.Br",
"dataset:mbruton/spanish_srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-15T15:20:48Z |
---
license: apache-2.0
datasets:
- mbruton/galician_srl
- CoNLL-2012
- PropBank.Br
- mbruton/spanish_srl
language:
- gl
- en
- pt
- es
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for GalXLM-R-enptsp for Semantic Role Labeling
This model is fine-tuned on a version of [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base) which is pre-trained on the SRL task for English, Portuguese, and Spanish, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL). Prior to this work, there were no published Galician datasets or models for SRL.
## Model Details
### Model Description
GalXLM-R-enptsp for Semantic Role Labeling (SRL) is a transformers model, leveraging XLM-R's extensive pretraining on 100 languages to achieve better SRL predictions for low-resource Galician. This model is additionally pre-trained on the SRL task for English, Portuguese, and Spanish. It was fine-tuned on Galician with the following objectives:
- Identify up to 13 verbal roots within a sentence.
- Identify available arguments for each verbal root. Due to the scarcity of data, this model focuses solely on the identification of arguments 0, 1, and 2.
Labels are formatted as `r#:tag`, where `r#` links the token to a specific verbal root of index `#`, and `tag` identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2), as in the usage sketch below.
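A minimal inference sketch with the `transformers` pipeline; the Galician example sentence is illustrative, and predictions come back as `r#:tag` strings such as `0:root`.

```python
from transformers import pipeline

# A minimal usage sketch; labels follow the r#:tag format described above.
srl = pipeline("token-classification", model="mbruton/gal_enptsp_XLM-R")
for token in srl("O neno comeu a mazá."):  # illustrative Galician sentence
    print(token["word"], token["entity"], round(token["score"], 3))
```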
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Galician (gl), English (en), Portuguese (pt), Spanish (es)
- **License:** Apache 2.0
- **Finetuned from model:** [English, Portuguese, and Spanish pre-trained XLM RoBERTa Base](https://huggingface.co/mbruton/spa_enpt_XLM-R)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Galician.
## Bias, Risks, and Limitations
Galician is a low-resource language which, prior to this project, lacked a semantic role labeling dataset. As such, the dataset used to train this model is extremely limited and could benefit from the inclusion of additional sentences and manual validation by native speakers.
## Training Details
### Training Data
This model was pre-trained on the [OntoNotes 5.0 English SRL corpus](http://catalog.ldc.upenn.edu/LDC2013T19), [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl), and the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
This model was fine-tuned on the "train" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
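These settings translate roughly into the following `TrainingArguments`; this is a sketch, and everything beyond the four values listed above (the output directory, the evaluation cadence) is an assumption.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gal-enptsp-srl",     # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    weight_decay=0.01,
    evaluation_strategy="epoch",     # assumed: evaluate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,     # needed for early stopping with Trainer
)
```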
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [GalicianSRL Dataset](https://huggingface.co/datasets/mbruton/galician_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It provides scores both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
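The per-label rows in the Results table below follow seqeval's classification report; a minimal sketch, again with hypothetical IOB-style labels standing in for the `r#:tag` scheme:

```python
from seqeval.metrics import classification_report

# A minimal sketch of the per-label report format used in the Results table;
# the label sequences are hypothetical.
y_true = [["B-arg0", "I-arg0", "B-root", "B-arg1"]]
y_pred = [["B-arg0", "I-arg0", "B-root", "B-arg2"]]
print(classification_report(y_true, y_pred))
```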
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0 | 0.80 | 0.67 | 0.73 | 485 |
| 0:arg1 | 0.66 | 0.74 | 0.69 | 483 |
| 0:arg2 | 0.68 | 0.73 | 0.70 | 264 |
| 0:root | 0.93 | 0.93 | 0.93 | 948 |
| 1:arg0 | 0.67 | 0.64 | 0.66 | 348 |
| 1:arg1 | 0.68 | 0.70 | 0.69 | 443 |
| 1:arg2 | 0.57 | 0.67 | 0.61 | 211 |
| 1:root | 0.84 | 0.86 | 0.85 | 802 |
| 2:arg0 | 0.58 | 0.60 | 0.59 | 240 |
| 2:arg1 | 0.63 | 0.65 | 0.64 | 331 |
| 2:arg2 | 0.54 | 0.69 | 0.61 | 156 |
| 2:root | 0.75 | 0.80 | 0.77 | 579 |
| 3:arg0 | 0.48 | 0.49 | 0.49 | 137 |
| 3:arg1 | 0.62 | 0.55 | 0.58 | 216 |
| 3:arg2 | 0.46 | 0.66 | 0.55 | 110 |
| 3:root | 0.69 | 0.73 | 0.71 | 374 |
| 4:arg0 | 0.54 | 0.47 | 0.50 | 70 |
| 4:arg1 | 0.55 | 0.60 | 0.57 | 109 |
| 4:arg2 | 0.44 | 0.42 | 0.43 | 66 |
| 4:root | 0.61 | 0.60 | 0.60 | 206 |
| 5:arg0 | 0.37 | 0.50 | 0.43 | 20 |
| 5:arg1 | 0.56 | 0.47 | 0.51 | 57 |
| 5:arg2 | 0.33 | 0.32 | 0.33 | 28 |
| 5:root | 0.57 | 0.51 | 0.54 | 102 |
| 6:arg0 | 0.38 | 0.23 | 0.29 | 13 |
| 6:arg1 | 0.26 | 0.36 | 0.31 | 25 |
| 6:arg2 | 0.00 | 0.00 | 0.00 | 8 |
| 6:root | 0.40 | 0.38 | 0.39 | 42 |
| 7:arg0 | 0.00 | 0.00 | 0.00 | 3 |
| 7:arg1 | 1.00 | 0.12 | 0.22 | 8 |
| 7:arg2 | 0.00 | 0.00 | 0.00 | 5 |
| 7:root | 0.20 | 0.19 | 0.19 | 16 |
| 8:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.00 | 0.00 | 0.00 | 7 |
| 9:arg0 | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1 | 0.00 | 0.00 | 0.00 | 2 |
| 9:arg2 | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1 | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.70 | 0.72 | 0.71 | 6926 |
| macro avg | 0.40 | 0.39 | 0.38 | 6926 |
| weighted avg | 0.70 | 0.72 | 0.71 | 6926 |
| tot root avg | 0.45 | 0.45 | 0.45 | 3081 |
| tot A0 avg | 0.38 | 0.36 | 0.37 | 1318 |
| tot A1 avg | 0.45 | 0.38 | 0.38 | 1677 |
| tot A2 avg | 0.30 | 0.35 | 0.32 | 850 |
| tot r0 avg | 0.77 | 0.77 | 0.76 | 2180 |
| tot r1 avg | 0.69 | 0.72 | 0.70 | 1804 |
| tot r2 avg | 0.63 | 0.69 | 0.65 | 1306 |
| tot r3 avg | 0.56 | 0.61 | 0.58 | 837 |
| tot r4 avg | 0.54 | 0.52 | 0.53 | 451 |
| tot r5 avg | 0.46 | 0.45 | 0.45 | 207 |
| tot r6 avg | 0.26 | 0.24 | 0.25 | 88 |
| tot r7 avg | 0.30 | 0.08 | 0.10 | 32 |
| tot r8 avg | 0.00 | 0.00 | 0.00 | 11 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 7 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 3 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
amyeroberts/temp_upload_test_local_7
|
amyeroberts
| 2024-01-03T14:14:12Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-cased",
"base_model:finetune:distilbert/distilbert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-28T16:22:36Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: distilbert-base-cased
model-index:
- name: amyeroberts/temp_upload_test_local_7
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amyeroberts/temp_upload_test_local_7
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2260
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
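For reference, the optimizer configuration above can be re-created in Keras; a minimal sketch:

```python
import tensorflow as tf

# Re-creates the reported optimizer configuration.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=False
)
```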
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.7381 | 0 |
| 0.2260 | 1 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.2.dev0
- Tokenizers 0.12.1
|
mbruton/spa_mBERT
|
mbruton
| 2024-01-03T14:14:12Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"es",
"dataset:mbruton/spanish_srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-14T17:08:20Z |
---
license: apache-2.0
datasets:
- mbruton/spanish_srl
language:
- es
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for SpaBERT for Semantic Role Labeling (cased)
This model is fine-tuned on a version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL).
## Model Details
### Model Description
SpaBERT for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for Spanish. It was fine-tuned on Spanish with the following objectives:
- Identify up to 16 verbal roots within a sentence.
- Identify available arguments and thematic roles for each verbal root.
Labels are formatted as `r#:tag`, where `r#` links the token to a specific verbal root of index `#`, and `tag` identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2/arg3/argM) and its thematic role (adv/agt/atr/ben/cau/cot/des/efi/ein/exp/ext/fin/ins/loc/mnr/ori/pat/src/tem/tmp); a short parsing sketch follows.
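A minimal sketch of decoding this label format; the example label is illustrative, and the handling of a plain `O` tag for unlabeled tokens is an assumption.

```python
def parse_label(label: str):
    """Split an 'r#:arg:role' label into (root index, argument, thematic role)."""
    if label == "O":  # assumed tag for tokens outside any argument
        return None
    parts = label.split(":")
    root = int(parts[0])
    tag = parts[1]                               # "root" or "arg0".."arg3"/"argM"
    role = parts[2] if len(parts) > 2 else None  # e.g. "agt", "pat", "tmp"
    return root, tag, role

print(parse_label("0:arg1:pat"))  # -> (0, 'arg1', 'pat')
```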
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Spanish (es)
- **License:** Apache 2.0
- **Finetuned from model:** [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Spanish.
## Bias, Risks, and Limitations
The Spanish training set lacked highly complex sentences; as such, the model performs much better on sentences of mid- to low complexity.
## Training Details
### Training Data
This model was fine-tuned on the "train" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
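Reading "Early Stopping: 10 epochs" as a patience of 10 evaluation rounds is an interpretation; under that assumption, the Trainer wiring might look like this sketch (the model, arguments, and tokenized datasets are assumed to be defined elsewhere).

```python
from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,                  # token-classification model, assumed defined
    args=args,                    # TrainingArguments with the values above
    train_dataset=train_dataset,  # tokenized "train" split, assumed defined
    eval_dataset=eval_dataset,    # tokenized evaluation split, assumed defined
    callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],
)
trainer.train()
```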
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It provides scores both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0:agt | 0.94 | 0.91 | 0.92 | 867 |
| 0:arg0:cau | 0.68 | 0.70 | 0.69 | 57 |
| 0:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg1:ext | 0.00 | 0.00 | 0.00 | 3 |
| 0:arg1:pat | 0.89 | 0.89 | 0.89 | 536 |
| 0:arg1:tem | 0.88 | 0.89 | 0.88 | 589 |
| 0:arg2:atr | 0.85 | 0.91 | 0.88 | 278 |
| 0:arg2:ben | 0.75 | 0.85 | 0.80 | 78 |
| 0:arg2:efi | 0.80 | 0.57 | 0.67 | 7 |
| 0:arg2:exp | 0.00 | 0.00 | 0.00 | 6 |
| 0:arg2:ext | 0.47 | 0.53 | 0.50 | 15 |
| 0:arg2:loc | 0.51 | 0.53 | 0.52 | 57 |
| 0:arg3:ben | 0.00 | 0.00 | 0.00 | 5 |
| 0:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg3:fin | 1.00 | 0.50 | 0.67 | 2 |
| 0:arg3:ori | 0.55 | 0.60 | 0.57 | 10 |
| 0:arg4:des | 0.47 | 0.88 | 0.61 | 16 |
| 0:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:adv | 0.64 | 0.55 | 0.59 | 268 |
| 0:argM:atr | 0.54 | 0.54 | 0.54 | 24 |
| 0:argM:cau | 0.72 | 0.56 | 0.63 | 41 |
| 0:argM:ext | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:fin | 0.79 | 0.74 | 0.76 | 46 |
| 0:argM:loc | 0.72 | 0.77 | 0.74 | 186 |
| 0:argM:mnr | 0.59 | 0.52 | 0.55 | 66 |
| 0:argM:tmp | 0.83 | 0.88 | 0.85 | 411 |
| 0:root | 0.99 | 0.99 | 0.99 | 1662 |
| 1:arg0:agt | 0.91 | 0.92 | 0.92 | 564 |
| 1:arg0:cau | 0.83 | 0.77 | 0.80 | 44 |
| 1:arg1:ext | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg1:pat | 0.88 | 0.89 | 0.88 | 482 |
| 1:arg1:tem | 0.90 | 0.88 | 0.89 | 390 |
| 1:arg2:atr | 0.85 | 0.88 | 0.86 | 197 |
| 1:arg2:ben | 0.76 | 0.83 | 0.80 | 66 |
| 1:arg2:efi | 0.67 | 0.33 | 0.44 | 6 |
| 1:arg2:ext | 0.56 | 0.71 | 0.63 | 7 |
| 1:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 1:arg2:loc | 0.55 | 0.50 | 0.52 | 44 |
| 1:arg3:ben | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ein | 0.00 | 0.00 | 0.00 | 3 |
| 1:arg3:fin | 1.00 | 0.50 | 0.67 | 2 |
| 1:arg3:ori | 0.17 | 0.50 | 0.25 | 2 |
| 1:arg4:des | 0.56 | 1.00 | 0.71 | 10 |
| 1:arg4:efi | 0.00 | 0.00 | 0.00 | 2 |
| 1:argM:adv | 0.68 | 0.53 | 0.59 | 220 |
| 1:argM:atr | 0.61 | 0.74 | 0.67 | 19 |
| 1:argM:cau | 0.45 | 0.66 | 0.53 | 35 |
| 1:argM:ext | 0.00 | 0.00 | 0.00 | 7 |
| 1:argM:fin | 0.54 | 0.74 | 0.62 | 38 |
| 1:argM:loc | 0.68 | 0.76 | 0.72 | 156 |
| 1:argM:mnr | 0.52 | 0.50 | 0.51 | 44 |
| 1:argM:tmp | 0.79 | 0.80 | 0.79 | 247 |
| 1:root | 0.96 | 0.97 | 0.96 | 1323 |
| 2:arg0:agt | 0.86 | 0.88 | 0.87 | 336 |
| 2:arg0:cau | 0.81 | 0.71 | 0.76 | 35 |
| 2:arg0:exp | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg1:pat | 0.86 | 0.84 | 0.85 | 333 |
| 2:arg1:tem | 0.85 | 0.82 | 0.84 | 291 |
| 2:arg2:atr | 0.87 | 0.89 | 0.88 | 124 |
| 2:arg2:ben | 0.70 | 0.81 | 0.75 | 43 |
| 2:arg2:efi | 1.00 | 0.78 | 0.88 | 9 |
| 2:arg2:ext | 0.17 | 0.20 | 0.18 | 5 |
| 2:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg2:loc | 0.51 | 0.67 | 0.58 | 27 |
| 2:arg3:ben | 0.00 | 0.00 | 0.00 | 4 |
| 2:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg3:ori | 0.29 | 0.67 | 0.40 | 3 |
| 2:arg4:des | 0.57 | 0.81 | 0.67 | 16 |
| 2:arg4:efi | 0.00 | 0.00 | 0.00 | 6 |
| 2:argM:adv | 0.60 | 0.51 | 0.55 | 176 |
| 2:argM:atr | 0.70 | 0.47 | 0.56 | 15 |
| 2:argM:cau | 0.45 | 0.53 | 0.49 | 17 |
| 2:argM:ext | 0.00 | 0.00 | 0.00 | 4 |
| 2:argM:fin | 0.83 | 0.69 | 0.76 | 36 |
| 2:argM:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:argM:loc | 0.66 | 0.70 | 0.68 | 117 |
| 2:argM:mnr | 0.35 | 0.23 | 0.28 | 35 |
| 2:argM:tmp | 0.74 | 0.77 | 0.76 | 161 |
| 2:root | 0.95 | 0.94 | 0.94 | 913 |
| 3:arg0:agt | 0.81 | 0.83 | 0.82 | 227 |
| 3:arg0:cau | 0.67 | 0.86 | 0.75 | 14 |
| 3:arg1:pat | 0.78 | 0.82 | 0.80 | 199 |
| 3:arg1:tem | 0.74 | 0.78 | 0.76 | 160 |
| 3:arg2:atr | 0.75 | 0.80 | 0.77 | 79 |
| 3:arg2:ben | 0.80 | 0.89 | 0.84 | 27 |
| 3:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 3:arg2:ext | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg2:loc | 0.50 | 0.38 | 0.43 | 21 |
| 3:arg3:ben | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg3:ein | 0.00 | 0.00 | 0.00 | 2 |
| 3:arg3:ori | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg4:des | 0.44 | 1.00 | 0.61 | 7 |
| 3:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 3:argM:adv | 0.48 | 0.43 | 0.45 | 98 |
| 3:argM:atr | 1.00 | 0.14 | 0.25 | 7 |
| 3:argM:cau | 0.42 | 0.38 | 0.40 | 13 |
| 3:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 3:argM:fin | 0.45 | 0.67 | 0.54 | 15 |
| 3:argM:loc | 0.58 | 0.65 | 0.62 | 69 |
| 3:argM:mnr | 0.33 | 0.45 | 0.38 | 11 |
| 3:argM:tmp | 0.78 | 0.76 | 0.77 | 92 |
| 3:root | 0.89 | 0.92 | 0.91 | 569 |
| 4:arg0:agt | 0.82 | 0.82 | 0.82 | 119 |
| 4:arg0:cau | 0.67 | 0.67 | 0.67 | 6 |
| 4:arg1:pat | 0.74 | 0.75 | 0.74 | 87 |
| 4:arg1:tem | 0.81 | 0.75 | 0.78 | 109 |
| 4:arg2:atr | 0.74 | 0.74 | 0.74 | 53 |
| 4:arg2:ben | 0.62 | 0.45 | 0.53 | 11 |
| 4:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg2:loc | 0.50 | 0.73 | 0.59 | 11 |
| 4:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg4:des | 0.88 | 0.70 | 0.78 | 10 |
| 4:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:adv | 0.41 | 0.52 | 0.46 | 50 |
| 4:argM:atr | 0.00 | 0.00 | 0.00 | 4 |
| 4:argM:cau | 0.12 | 0.33 | 0.18 | 3 |
| 4:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:fin | 0.50 | 0.55 | 0.52 | 11 |
| 4:argM:loc | 0.56 | 0.83 | 0.67 | 24 |
| 4:argM:mnr | 0.00 | 0.00 | 0.00 | 16 |
| 4:argM:tmp | 0.70 | 0.73 | 0.72 | 52 |
| 4:root | 0.86 | 0.83 | 0.84 | 322 |
| 5:arg0:agt | 0.74 | 0.82 | 0.78 | 72 |
| 5:arg0:cau | 1.00 | 0.40 | 0.57 | 5 |
| 5:arg1:pat | 0.60 | 0.75 | 0.66 | 71 |
| 5:arg1:tem | 0.82 | 0.68 | 0.75 | 41 |
| 5:arg2:atr | 0.65 | 0.62 | 0.63 | 21 |
| 5:arg2:ben | 0.33 | 0.67 | 0.44 | 6 |
| 5:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:argM:adv | 0.47 | 0.54 | 0.50 | 26 |
| 5:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 5:argM:fin | 0.33 | 0.40 | 0.36 | 5 |
| 5:argM:loc | 0.75 | 0.57 | 0.65 | 21 |
| 5:argM:mnr | 0.00 | 0.00 | 0.00 | 7 |
| 5:argM:tmp | 0.71 | 0.73 | 0.72 | 30 |
| 5:root | 0.76 | 0.79 | 0.78 | 173 |
| 6:arg0:agt | 0.71 | 0.59 | 0.65 | 34 |
| 6:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:pat | 0.39 | 0.54 | 0.45 | 28 |
| 6:arg1:tem | 0.40 | 0.50 | 0.44 | 16 |
| 6:arg2:atr | 0.30 | 0.46 | 0.36 | 13 |
| 6:arg2:ben | 0.27 | 0.60 | 0.37 | 5 |
| 6:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg3:ben | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:adv | 0.21 | 0.40 | 0.28 | 10 |
| 6:argM:atr | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:loc | 0.38 | 0.71 | 0.50 | 7 |
| 6:argM:mnr | 0.00 | 0.00 | 0.00 | 5 |
| 6:argM:tmp | 0.14 | 0.29 | 0.19 | 7 |
| 6:root | 0.62 | 0.68 | 0.65 | 82 |
| 7:arg0:agt | 0.39 | 0.76 | 0.52 | 17 |
| 7:arg1:pat | 0.47 | 0.53 | 0.50 | 17 |
| 7:arg1:tem | 0.54 | 0.47 | 0.50 | 15 |
| 7:arg2:atr | 0.30 | 0.20 | 0.24 | 15 |
| 7:arg2:ben | 0.00 | 0.00 | 0.00 | 7 |
| 7:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:adv | 0.14 | 0.60 | 0.22 | 5 |
| 7:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:loc | 0.00 | 0.00 | 0.00 | 3 |
| 7:argM:tmp | 0.00 | 0.00 | 0.00 | 6 |
| 7:root | 0.69 | 0.53 | 0.60 | 45 |
| 8:arg0:agt | 0.00 | 0.00 | 0.00 | 8 |
| 8:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg1:tem | 0.07 | 0.11 | 0.08 | 9 |
| 8:arg2:atr | 0.17 | 0.25 | 0.20 | 4 |
| 8:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg2:loc | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:adv | 0.00 | 0.00 | 0.00 | 8 |
| 8:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:loc | 0.00 | 0.00 | 0.00 | 4 |
| 8:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.48 | 0.60 | 0.54 | 25 |
| 9:arg0:agt | 0.00 | 0.00 | 0.00 | 6 |
| 9:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 9:arg1:tem | 0.00 | 0.00 | 0.00 | 5 |
| 9:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 9:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:adv | 0.00 | 0.00 | 0.00 | 6 |
| 9:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:loc | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.25 | 0.88 | 0.39 | 17 |
| 10:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1:pat | 0.00 | 0.00 | 0.00 | 5 |
| 10:arg1:tem | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 10:arg2:ben | 0.00 | 0.00 | 0.00 | 2 |
| 10:argM:adv | 0.00 | 0.00 | 0.00 | 3 |
| 10:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 10:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 12 |
| 11:arg0:agt | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg1:pat | 0.00 | 0.00 | 0.00 | 2 |
| 11:arg1:tem | 0.00 | 0.00 | 0.00 | 4 |
| 11:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 11:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:adv | 0.00 | 0.00 | 0.00 | 4 |
| 11:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 11:root | 0.00 | 0.00 | 0.00 | 9 |
| 12:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 12:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 12:arg1:tem | 0.00 | 0.00 | 0.00 | 2 |
| 12:arg2:atr | 0.00 | 0.00 | 0.00 | 2 |
| 12:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:tmp | 0.00 | 0.00 | 0.00 | 3 |
| 12:root | 0.00 | 0.00 | 0.00 | 7 |
| 13:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg1:tem | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 13:root | 0.00 | 0.00 | 0.00 | 4 |
| 14:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 14:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 14:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 14:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.83 | 0.83 | 0.83 | 15436 |
| macro avg | 0.31 | 0.33 | 0.31 | 15436 |
| weighted avg | 0.82 | 0.83 | 0.82 | 15436 |
| tot root avg | 0.50 | 0.54 | 0.51 | 5165 |
| tot arg0:agt avg | 0.48 | 0.50 | 0.48 | 2257 |
| tot arg0:cau avg | 0.42 | 0.37 | 0.39 | 166 |
| tot arg0:exp avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg0:src avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg0 | 0.40 | 0.39 | 0.39 | 2426 |
| tot arg1:ext avg | 0.00 | 0.00 | 0.00 | 5 |
| tot arg1:loc avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg1:pat avg | 0.40 | 0.43 | 0.41 | 1770 |
| tot arg1:tem avg | 0.43 | 0.42 | 0.42 | 1635 |
| tot arg1 | 0.37 | 0.38 | 0.38 | 3411 |
| tot arg2:atr avg | 0.39 | 0.41 | 0.40 | 794 |
| tot arg2:ben avg | 0.36 | 0.47 | 0.40 | 255 |
| tot arg2:efi avg | 0.49 | 0.34 | 0.40 | 24 |
| tot arg2:exp avg | 0.00 | 0.00 | 0.00 | 6 |
| tot arg2:ext avg | 0.17 | 0.21 | 0.19 | 33 |
| tot arg2:ins avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg2:loc avg | 0.29 | 0.31 | 0.29 | 165 |
| tot arg2 | 0.32 | 0.35 | 0.33 | 1279 |
| tot arg3:ben avg | 0.00 | 0.00 | 0.00 | 15 |
| tot arg3:ein avg | 0.00 | 0.00 | 0.00 | 9 |
| tot arg3:fin avg | 1.00 | 0.50 | 0.67 | 4 |
| tot arg3:ori avg | 0.14 | 0.25 | 0.17 | 21 |
| tot arg3 | 0.15 | 0.14 | 0.13 | 49 |
| tot arg4:des avg | 0.42 | 0.63 | 0.48 | 61 |
| tot arg4:efi avg | 0.00 | 0.00 | 0.00 | 20 |
| tot arg4 | 0.22 | 0.34 | 0.26 | 81 |
| tot argM:adv avg | 0.26 | 0.29 | 0.26 | 876 |
| tot argM:atr avg | 0.36 | 0.24 | 0.25 | 73 |
| tot argM:cau avg | 0.24 | 0.27 | 0.25 | 115 |
| tot argM:ext avg | 0.00 | 0.00 | 0.00 | 19 |
| tot argM:fin avg | 0.31 | 0.34 | 0.32 | 158 |
| tot argM:ins avg | 0.00 | 0.00 | 0.00 | 1 |
| tot argM:loc avg | 0.36 | 0.42 | 0.38 | 591 |
| tot argM:mnr avg | 0.20 | 0.19 | 0.19 | 186 |
| tot argM:tmp avg | 0.36 | 0.38 | 0.37 | 1013 |
| tot argM | 0.28 | 0.29 | 0.27 | 3032 |
| tot r0 avg | 0.54 | 0.53 | 0.53 | 5242 |
| tot r1 avg | 0.53 | 0.55 | 0.53 | 3913 |
| tot r2 avg | 0.47 | 0.48 | 0.47 | 2711 |
| tot r3 avg | 0.45 | 0.47 | 0.44 | 1626 |
| tot r4 avg | 0.43 | 0.45 | 0.43 | 892 |
| tot r5 avg | 0.38 | 0.37 | 0.36 | 487 |
| tot r6 avg | 0.20 | 0.28 | 0.23 | 216 |
| tot r7 avg | 0.18 | 0.22 | 0.18 | 135 |
| tot r8 avg | 0.05 | 0.06 | 0.05 | 71 |
| tot r9 avg | 0.02 | 0.07 | 0.03 | 49 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 31 |
| tot r11 avg | 0.00 | 0.00 | 0.00 | 27 |
| tot r12 avg | 0.00 | 0.00 | 0.00 | 20 |
| tot r13 avg | 0.00 | 0.00 | 0.00 | 10 |
| tot r14 avg | 0.00 | 0.00 | 0.00 | 5 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/spa_en_mBERT
|
mbruton
| 2024-01-03T14:13:50Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"es",
"en",
"dataset:mbruton/spanish_srl",
"dataset:CoNLL-2012",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-14T18:00:07Z |
---
license: apache-2.0
datasets:
- mbruton/spanish_srl
- CoNLL-2012
language:
- es
- en
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for SpaBERT-en for Semantic Role Labeling (cased)
This model is fine-tuned on a version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) which is pre-trained on the SRL task for English, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL).
## Model Details
### Model Description
SpaBERT-en for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for Spanish. This model is additionally pre-trained on the SRL task for English. It was fine-tuned on Spanish with the following objectives:
- Identify up to 16 verbal roots within a sentence.
- Identify available arguments and thematic roles for each verbal root.
Labels are formatted as `r#:tag`, where `r#` links the token to a specific verbal root of index `#`, and `tag` identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2/arg3/argM) and its thematic role (adv/agt/atr/ben/cau/cot/des/efi/ein/exp/ext/fin/ins/loc/mnr/ori/pat/src/tem/tmp), as in the usage sketch below.
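A minimal usage sketch; the Spanish example sentence is illustrative, and predictions come back as `r#:tag` strings.

```python
from transformers import pipeline

# A minimal usage sketch; labels follow the r#:tag format described above.
srl = pipeline("token-classification", model="mbruton/spa_en_mBERT")
for token in srl("El niño comió la manzana."):  # illustrative Spanish sentence
    print(token["word"], token["entity"])
```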
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Spanish (es), English (en)
- **License:** Apache 2.0
- **Finetuned from model:** [English pre-trained multilingual BERT](https://huggingface.co/liaad/srl-en_mbert-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Spanish.
## Bias, Risks, and Limitations
The Spanish training set lacked highly complex sentences; as such, the model performs much better on sentences of mid- to low complexity.
## Training Details
### Training Data
This model was pre-trained on the [OntoNotes 5.0 English SRL corpus](http://catalog.ldc.upenn.edu/LDC2013T19).
This model was fine-tuned on the "train" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It provides scores both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0:agt | 0.95 | 0.87 | 0.91 | 867 |
| 0:arg0:cau | 0.67 | 0.72 | 0.69 | 57 |
| 0:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg1:ext | 0.00 | 0.00 | 0.00 | 3 |
| 0:arg1:pat | 0.89 | 0.87 | 0.88 | 536 |
| 0:arg1:tem | 0.85 | 0.90 | 0.87 | 589 |
| 0:arg2:atr | 0.84 | 0.89 | 0.87 | 278 |
| 0:arg2:ben | 0.75 | 0.83 | 0.79 | 78 |
| 0:arg2:efi | 0.75 | 0.43 | 0.55 | 7 |
| 0:arg2:exp | 0.00 | 0.00 | 0.00 | 6 |
| 0:arg2:ext | 0.50 | 0.40 | 0.44 | 15 |
| 0:arg2:loc | 0.52 | 0.63 | 0.57 | 57 |
| 0:arg3:ben | 0.00 | 0.00 | 0.00 | 5 |
| 0:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg3:fin | 0.50 | 0.50 | 0.50 | 2 |
| 0:arg3:ori | 0.50 | 0.50 | 0.50 | 10 |
| 0:arg4:des | 0.48 | 0.81 | 0.60 | 16 |
| 0:arg4:efi | 0.50 | 0.20 | 0.29 | 5 |
| 0:argM:adv | 0.64 | 0.50 | 0.56 | 268 |
| 0:argM:atr | 0.52 | 0.54 | 0.53 | 24 |
| 0:argM:cau | 0.71 | 0.59 | 0.64 | 41 |
| 0:argM:ext | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:fin | 0.75 | 0.78 | 0.77 | 46 |
| 0:argM:loc | 0.71 | 0.80 | 0.75 | 186 |
| 0:argM:mnr | 0.59 | 0.59 | 0.59 | 66 |
| 0:argM:tmp | 0.87 | 0.88 | 0.87 | 411 |
| 0:root | 0.99 | 0.99 | 0.99 | 1662 |
| 1:arg0:agt | 0.92 | 0.88 | 0.90 | 564 |
| 1:arg0:cau | 0.77 | 0.77 | 0.77 | 44 |
| 1:arg1:ext | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg1:pat | 0.86 | 0.87 | 0.87 | 482 |
| 1:arg1:tem | 0.86 | 0.88 | 0.87 | 390 |
| 1:arg2:atr | 0.85 | 0.89 | 0.87 | 197 |
| 1:arg2:ben | 0.72 | 0.83 | 0.77 | 66 |
| 1:arg2:efi | 1.00 | 0.33 | 0.50 | 6 |
| 1:arg2:ext | 0.36 | 0.57 | 0.44 | 7 |
| 1:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 1:arg2:loc | 0.48 | 0.57 | 0.52 | 44 |
| 1:arg3:ben | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ein | 0.00 | 0.00 | 0.00 | 3 |
| 1:arg3:fin | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ori | 0.12 | 0.50 | 0.20 | 2 |
| 1:arg4:des | 0.39 | 0.90 | 0.55 | 10 |
| 1:arg4:efi | 0.00 | 0.00 | 0.00 | 2 |
| 1:argM:adv | 0.65 | 0.52 | 0.58 | 220 |
| 1:argM:atr | 0.65 | 0.58 | 0.61 | 19 |
| 1:argM:cau | 0.52 | 0.66 | 0.58 | 35 |
| 1:argM:ext | 0.00 | 0.00 | 0.00 | 7 |
| 1:argM:fin | 0.54 | 0.74 | 0.62 | 38 |
| 1:argM:loc | 0.68 | 0.79 | 0.73 | 156 |
| 1:argM:mnr | 0.51 | 0.52 | 0.52 | 44 |
| 1:argM:tmp | 0.79 | 0.84 | 0.81 | 247 |
| 1:root | 0.96 | 0.96 | 0.96 | 1323 |
| 2:arg0:agt | 0.86 | 0.87 | 0.86 | 336 |
| 2:arg0:cau | 0.81 | 0.71 | 0.76 | 35 |
| 2:arg0:exp | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg1:pat | 0.81 | 0.84 | 0.83 | 333 |
| 2:arg1:tem | 0.81 | 0.86 | 0.84 | 291 |
| 2:arg2:atr | 0.83 | 0.92 | 0.87 | 124 |
| 2:arg2:ben | 0.65 | 0.81 | 0.72 | 43 |
| 2:arg2:efi | 0.78 | 0.78 | 0.78 | 9 |
| 2:arg2:ext | 0.50 | 0.40 | 0.44 | 5 |
| 2:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg2:loc | 0.46 | 0.67 | 0.55 | 27 |
| 2:arg3:ben | 0.00 | 0.00 | 0.00 | 4 |
| 2:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg3:ori | 0.43 | 1.00 | 0.60 | 3 |
| 2:arg4:des | 0.45 | 0.56 | 0.50 | 16 |
| 2:arg4:efi | 0.00 | 0.00 | 0.00 | 6 |
| 2:argM:adv | 0.53 | 0.44 | 0.48 | 176 |
| 2:argM:atr | 0.50 | 0.40 | 0.44 | 15 |
| 2:argM:cau | 0.45 | 0.59 | 0.51 | 17 |
| 2:argM:ext | 0.00 | 0.00 | 0.00 | 4 |
| 2:argM:fin | 0.77 | 0.64 | 0.70 | 36 |
| 2:argM:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:argM:loc | 0.68 | 0.78 | 0.73 | 117 |
| 2:argM:mnr | 0.32 | 0.31 | 0.32 | 35 |
| 2:argM:tmp | 0.80 | 0.81 | 0.80 | 161 |
| 2:root | 0.93 | 0.93 | 0.93 | 913 |
| 3:arg0:agt | 0.82 | 0.81 | 0.82 | 227 |
| 3:arg0:cau | 0.69 | 0.79 | 0.73 | 14 |
| 3:arg1:pat | 0.81 | 0.88 | 0.85 | 199 |
| 3:arg1:tem | 0.72 | 0.83 | 0.77 | 160 |
| 3:arg2:atr | 0.70 | 0.80 | 0.75 | 79 |
| 3:arg2:ben | 0.68 | 0.78 | 0.72 | 27 |
| 3:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 3:arg2:ext | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg2:loc | 0.50 | 0.48 | 0.49 | 21 |
| 3:arg3:ben | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg3:ein | 0.00 | 0.00 | 0.00 | 2 |
| 3:arg3:ori | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg4:des | 0.37 | 1.00 | 0.54 | 7 |
| 3:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 3:argM:adv | 0.47 | 0.49 | 0.48 | 98 |
| 3:argM:atr | 0.00 | 0.00 | 0.00 | 7 |
| 3:argM:cau | 0.40 | 0.15 | 0.22 | 13 |
| 3:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 3:argM:fin | 0.38 | 0.60 | 0.46 | 15 |
| 3:argM:loc | 0.61 | 0.64 | 0.62 | 69 |
| 3:argM:mnr | 0.36 | 0.45 | 0.40 | 11 |
| 3:argM:tmp | 0.82 | 0.82 | 0.82 | 92 |
| 3:root | 0.88 | 0.93 | 0.90 | 569 |
| 4:arg0:agt | 0.78 | 0.82 | 0.80 | 119 |
| 4:arg0:cau | 0.75 | 0.50 | 0.60 | 6 |
| 4:arg1:pat | 0.74 | 0.76 | 0.75 | 87 |
| 4:arg1:tem | 0.82 | 0.74 | 0.78 | 109 |
| 4:arg2:atr | 0.76 | 0.77 | 0.77 | 53 |
| 4:arg2:ben | 0.47 | 0.64 | 0.54 | 11 |
| 4:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg2:loc | 0.57 | 0.73 | 0.64 | 11 |
| 4:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg4:des | 0.50 | 0.40 | 0.44 | 10 |
| 4:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:adv | 0.44 | 0.38 | 0.41 | 50 |
| 4:argM:atr | 0.00 | 0.00 | 0.00 | 4 |
| 4:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 4:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:fin | 0.38 | 0.45 | 0.42 | 11 |
| 4:argM:loc | 0.54 | 0.79 | 0.64 | 24 |
| 4:argM:mnr | 0.00 | 0.00 | 0.00 | 16 |
| 4:argM:tmp | 0.67 | 0.71 | 0.69 | 52 |
| 4:root | 0.82 | 0.83 | 0.83 | 322 |
| 5:arg0:agt | 0.67 | 0.64 | 0.65 | 72 |
| 5:arg0:cau | 1.00 | 0.20 | 0.33 | 5 |
| 5:arg1:pat | 0.65 | 0.70 | 0.68 | 71 |
| 5:arg1:tem | 0.65 | 0.54 | 0.59 | 41 |
| 5:arg2:atr | 0.69 | 0.52 | 0.59 | 21 |
| 5:arg2:ben | 0.44 | 0.67 | 0.53 | 6 |
| 5:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:argM:adv | 0.35 | 0.46 | 0.40 | 26 |
| 5:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 5:argM:fin | 0.33 | 0.60 | 0.43 | 5 |
| 5:argM:loc | 0.56 | 0.48 | 0.51 | 21 |
| 5:argM:mnr | 0.00 | 0.00 | 0.00 | 7 |
| 5:argM:tmp | 0.59 | 0.57 | 0.58 | 30 |
| 5:root | 0.74 | 0.73 | 0.74 | 173 |
| 6:arg0:agt | 0.61 | 0.50 | 0.55 | 34 |
| 6:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:pat | 0.50 | 0.61 | 0.55 | 28 |
| 6:arg1:tem | 0.25 | 0.25 | 0.25 | 16 |
| 6:arg2:atr | 0.29 | 0.62 | 0.39 | 13 |
| 6:arg2:ben | 0.33 | 1.00 | 0.50 | 5 |
| 6:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg3:ben | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:adv | 0.11 | 0.20 | 0.14 | 10 |
| 6:argM:atr | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:loc | 0.50 | 0.71 | 0.59 | 7 |
| 6:argM:mnr | 0.00 | 0.00 | 0.00 | 5 |
| 6:argM:tmp | 0.21 | 0.43 | 0.29 | 7 |
| 6:root | 0.64 | 0.57 | 0.61 | 82 |
| 7:arg0:agt | 0.41 | 0.82 | 0.55 | 17 |
| 7:arg1:pat | 0.58 | 0.65 | 0.61 | 17 |
| 7:arg1:tem | 0.32 | 0.60 | 0.42 | 15 |
| 7:arg2:atr | 0.25 | 0.20 | 0.22 | 15 |
| 7:arg2:ben | 0.00 | 0.00 | 0.00 | 7 |
| 7:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:adv | 0.04 | 0.20 | 0.07 | 5 |
| 7:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:loc | 0.00 | 0.00 | 0.00 | 3 |
| 7:argM:tmp | 0.17 | 0.17 | 0.17 | 6 |
| 7:root | 0.56 | 0.44 | 0.49 | 45 |
| 8:arg0:agt | 0.00 | 0.00 | 0.00 | 8 |
| 8:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg1:tem | 0.07 | 0.11 | 0.08 | 9 |
| 8:arg2:atr | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg2:loc | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:adv | 0.00 | 0.00 | 0.00 | 8 |
| 8:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:loc | 0.00 | 0.00 | 0.00 | 4 |
| 8:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.38 | 0.68 | 0.49 | 25 |
| 9:arg0:agt | 0.00 | 0.00 | 0.00 | 6 |
| 9:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 9:arg1:tem | 0.00 | 0.00 | 0.00 | 5 |
| 9:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 9:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:adv | 0.00 | 0.00 | 0.00 | 6 |
| 9:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:loc | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.25 | 0.76 | 0.37 | 17 |
| 10:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1:pat | 0.00 | 0.00 | 0.00 | 5 |
| 10:arg1:tem | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 10:arg2:ben | 0.00 | 0.00 | 0.00 | 2 |
| 10:argM:adv | 0.00 | 0.00 | 0.00 | 3 |
| 10:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 10:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 12 |
| 11:arg0:agt | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg1:pat | 0.00 | 0.00 | 0.00 | 2 |
| 11:arg1:tem | 0.00 | 0.00 | 0.00 | 4 |
| 11:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 11:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:adv | 0.00 | 0.00 | 0.00 | 4 |
| 11:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 11:root | 0.00 | 0.00 | 0.00 | 9 |
| 12:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 12:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 12:arg1:tem | 0.00 | 0.00 | 0.00 | 2 |
| 12:arg2:atr | 0.00 | 0.00 | 0.00 | 2 |
| 12:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:tmp | 0.00 | 0.00 | 0.00 | 3 |
| 12:root | 0.00 | 0.00 | 0.00 | 7 |
| 13:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg1:tem | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 13:root | 0.00 | 0.00 | 0.00 | 4 |
| 14:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 14:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 14:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 14:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.82 | 0.82 | 0.82 | 15436 |
| macro avg | 0.29 | 0.32 | 0.30 | 15436 |
| weighted avg | 0.81 | 0.82 | 0.81 | 15436 |
| tot root avg | 0.48 | 0.52 | 0.49 | 5165 |
| tot arg0:agt avg | 0.46 | 0.48 | 0.46 | 2257 |
| tot arg0:cau avg | 0.43 | 0.34 | 0.35 | 166 |
| tot arg0:exp avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg0:src avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg0 | 0.40 | 0.37 | 0.37 | 2426 |
| tot arg1:ext avg | 0.00 | 0.00 | 0.00 | 5 |
| tot arg1:loc avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg1:pat avg | 0.42 | 0.44 | 0.43 | 1770 |
| tot arg1:tem avg | 0.38 | 0.41 | 0.39 | 1635 |
| tot arg1 | 0.36 | 0.38 | 0.37 | 3411 |
| tot arg2:atr avg | 0.37 | 0.40 | 0.38 | 794 |
| tot arg2:ben avg | 0.34 | 0.50 | 0.39 | 255 |
| tot arg2:efi avg | 0.51 | 0.31 | 0.37 | 24 |
| tot arg2:exp avg | 0.00 | 0.00 | 0.00 | 6 |
| tot arg2:ext avg | 0.19 | 0.20 | 0.19 | 33 |
| tot arg2:ins avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg2:loc avg | 0.28 | 0.34 | 0.31 | 165 |
| tot arg2 | 0.31 | 0.36 | 0.32 | 1279 |
| tot arg3:ben avg | 0.00 | 0.00 | 0.00 | 15 |
| tot arg3:ein avg | 0.00 | 0.00 | 0.00 | 9 |
| tot arg3:fin avg | 0.25 | 0.25 | 0.25 | 4 |
| tot arg3:ori avg | 0.15 | 0.29 | 0.19 | 21 |
| tot arg3 | 0.08 | 0.13 | 0.09 | 49 |
| tot arg4:des avg | 0.31 | 0.52 | 0.38 | 61 |
| tot arg4:efi avg | 0.08 | 0.03 | 0.05 | 20 |
| tot arg4 | 0.21 | 0.30 | 0.22 | 81 |
| tot argM:adv avg | 0.23 | 0.23 | 0.22 | 876 |
| tot argM:atr avg | 0.21 | 0.19 | 0.20 | 73 |
| tot argM:cau avg | 0.23 | 0.22 | 0.22 | 115 |
| tot argM:ext avg | 0.00 | 0.00 | 0.00 | 19 |
| tot argM:fin avg | 0.29 | 0.35 | 0.31 | 158 |
| tot argM:ins avg | 0.00 | 0.00 | 0.00 | 1 |
| tot argM:loc avg | 0.36 | 0.42 | 0.38 | 591 |
| tot argM:mnr avg | 0.20 | 0.21 | 0.20 | 186 |
| tot argM:tmp avg | 0.38 | 0.40 | 0.39 | 1013 |
| tot argM | 0.25 | 0.27 | 0.26 | 3032 |
| tot r0 avg | 0.54 | 0.53 | 0.52 | 5242 |
| tot r1 avg | 0.49 | 0.52 | 0.49 | 3913 |
| tot r2 avg | 0.46 | 0.49 | 0.47 | 2711 |
| tot r3 avg | 0.40 | 0.45 | 0.42 | 1626 |
| tot r4 avg | 0.39 | 0.41 | 0.40 | 892 |
| tot r5 avg | 0.35 | 0.32 | 0.32 | 487 |
| tot r6 avg | 0.20 | 0.29 | 0.23 | 216 |
| tot r7 avg | 0.17 | 0.22 | 0.18 | 135 |
| tot r8 avg | 0.03 | 0.05 | 0.04 | 71 |
| tot r9 avg | 0.02 | 0.06 | 0.03 | 49 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 31 |
| tot r11 avg | 0.00 | 0.00 | 0.00 | 27 |
| tot r12 avg | 0.00 | 0.00 | 0.00 | 20 |
| tot r13 avg | 0.00 | 0.00 | 0.00 | 10 |
| tot r14 avg | 0.00 | 0.00 | 0.00 | 5 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/spa_pt_mBERT
|
mbruton
| 2024-01-03T14:13:29Z | 84 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"es",
"pt",
"dataset:mbruton/spanish_srl",
"dataset:PropBank.Br",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-14T18:25:40Z |
---
license: apache-2.0
datasets:
- mbruton/spanish_srl
- PropBank.Br
language:
- es
- pt
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for SpaBERT-pt for Semantic Role Labeling (cased)
This model is fine-tuned on a version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) which is pre-trained on the SRL task for Portuguese, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL).
## Model Details
### Model Description
SpaBERT-pt for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for Spanish. This model is additionally pre-trained on the SRL task for Portuguese. It was fine-tuned on Spanish with the following objectives:
- Identify up to 16 verbal roots within a sentence.
- Identify available arguments and thematic roles for each verbal root.
Labels are formatted as `r#:tag`, where `r#` links the token to a specific verbal root of index `#`, and `tag` identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2/arg3/argM) and its thematic role (adv/agt/atr/ben/cau/cot/des/efi/ein/exp/ext/fin/ins/loc/mnr/ori/pat/src/tem/tmp).
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Spanish (es), Portuguese (pt)
- **License:** Apache 2.0
- **Finetuned from model:** [Portuguese pre-trained multilingual BERT](https://huggingface.co/liaad/srl-pt_mbert-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Spanish.
## Bias, Risks, and Limitations
The Spanish training set lacked highly complex sentences; as such, the model performs much better on sentences of mid- to low complexity.
## Training Details
### Training Data
This model was pre-trained on the [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl).
This model was fine-tuned on the "train" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
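Both splits can be pulled with the `datasets` library; a minimal sketch, assuming the Hub dataset's default configuration:

```python
from datasets import load_dataset

ds = load_dataset("mbruton/spanish_srl")  # "train" and "test" splits per this card
print(ds["train"][0])
```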
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It provides scores both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0:agt | 0.92 | 0.92 | 0.92 | 867 |
| 0:arg0:cau | 0.67 | 0.67 | 0.67 | 57 |
| 0:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg1:ext | 0.00 | 0.00 | 0.00 | 3 |
| 0:arg1:pat | 0.89 | 0.88 | 0.88 | 536 |
| 0:arg1:tem | 0.88 | 0.88 | 0.88 | 589 |
| 0:arg2:atr | 0.88 | 0.86 | 0.87 | 278 |
| 0:arg2:ben | 0.81 | 0.79 | 0.80 | 78 |
| 0:arg2:efi | 0.75 | 0.43 | 0.55 | 7 |
| 0:arg2:exp | 0.50 | 0.33 | 0.40 | 6 |
| 0:arg2:ext | 0.67 | 0.53 | 0.59 | 15 |
| 0:arg2:loc | 0.61 | 0.39 | 0.47 | 57 |
| 0:arg3:ben | 0.50 | 0.20 | 0.29 | 5 |
| 0:arg3:ein | 0.50 | 1.00 | 0.67 | 1 |
| 0:arg3:fin | 0.50 | 0.50 | 0.50 | 2 |
| 0:arg3:ori | 0.50 | 0.40 | 0.44 | 10 |
| 0:arg4:des | 0.52 | 0.69 | 0.59 | 16 |
| 0:arg4:efi | 0.40 | 0.40 | 0.40 | 5 |
| 0:argM:adv | 0.53 | 0.63 | 0.58 | 268 |
| 0:argM:atr | 0.53 | 0.67 | 0.59 | 24 |
| 0:argM:cau | 0.68 | 0.63 | 0.66 | 41 |
| 0:argM:ext | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:fin | 0.78 | 0.70 | 0.74 | 46 |
| 0:argM:loc | 0.70 | 0.74 | 0.72 | 186 |
| 0:argM:mnr | 0.66 | 0.41 | 0.50 | 66 |
| 0:argM:tmp | 0.85 | 0.86 | 0.86 | 411 |
| 0:root | 0.98 | 0.98 | 0.98 | 1662 |
| 1:arg0:agt | 0.91 | 0.90 | 0.91 | 564 |
| 1:arg0:cau | 0.71 | 0.84 | 0.77 | 44 |
| 1:arg1:ext | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg1:pat | 0.89 | 0.85 | 0.87 | 482 |
| 1:arg1:tem | 0.88 | 0.88 | 0.88 | 390 |
| 1:arg2:atr | 0.88 | 0.88 | 0.88 | 197 |
| 1:arg2:ben | 0.82 | 0.82 | 0.82 | 66 |
| 1:arg2:efi | 0.83 | 0.83 | 0.83 | 6 |
| 1:arg2:ext | 0.50 | 0.71 | 0.59 | 7 |
| 1:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 1:arg2:loc | 0.69 | 0.45 | 0.55 | 44 |
| 1:arg3:ben | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ein | 0.00 | 0.00 | 0.00 | 3 |
| 1:arg3:fin | 1.00 | 1.00 | 1.00 | 2 |
| 1:arg3:ori | 0.17 | 0.50 | 0.25 | 2 |
| 1:arg4:des | 0.56 | 0.90 | 0.69 | 10 |
| 1:arg4:efi | 0.00 | 0.00 | 0.00 | 2 |
| 1:argM:adv | 0.59 | 0.59 | 0.59 | 220 |
| 1:argM:atr | 0.71 | 0.79 | 0.75 | 19 |
| 1:argM:cau | 0.59 | 0.69 | 0.63 | 35 |
| 1:argM:ext | 0.00 | 0.00 | 0.00 | 7 |
| 1:argM:fin | 0.60 | 0.66 | 0.62 | 38 |
| 1:argM:loc | 0.74 | 0.68 | 0.71 | 156 |
| 1:argM:mnr | 0.68 | 0.39 | 0.49 | 44 |
| 1:argM:tmp | 0.80 | 0.84 | 0.82 | 247 |
| 1:root | 0.96 | 0.95 | 0.96 | 1323 |
| 2:arg0:agt | 0.87 | 0.88 | 0.87 | 336 |
| 2:arg0:cau | 0.81 | 0.74 | 0.78 | 35 |
| 2:arg0:exp | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg1:pat | 0.85 | 0.83 | 0.84 | 333 |
| 2:arg1:tem | 0.82 | 0.84 | 0.83 | 291 |
| 2:arg2:atr | 0.84 | 0.87 | 0.86 | 124 |
| 2:arg2:ben | 0.69 | 0.77 | 0.73 | 43 |
| 2:arg2:efi | 0.70 | 0.78 | 0.74 | 9 |
| 2:arg2:ext | 0.14 | 0.20 | 0.17 | 5 |
| 2:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg2:loc | 0.44 | 0.44 | 0.44 | 27 |
| 2:arg3:ben | 0.00 | 0.00 | 0.00 | 4 |
| 2:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg3:ori | 0.43 | 1.00 | 0.60 | 3 |
| 2:arg4:des | 0.50 | 0.75 | 0.60 | 16 |
| 2:arg4:efi | 0.00 | 0.00 | 0.00 | 6 |
| 2:argM:adv | 0.52 | 0.54 | 0.53 | 176 |
| 2:argM:atr | 0.80 | 0.53 | 0.64 | 15 |
| 2:argM:cau | 0.48 | 0.76 | 0.59 | 17 |
| 2:argM:ext | 0.00 | 0.00 | 0.00 | 4 |
| 2:argM:fin | 0.78 | 0.78 | 0.78 | 36 |
| 2:argM:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:argM:loc | 0.70 | 0.68 | 0.69 | 117 |
| 2:argM:mnr | 0.42 | 0.31 | 0.36 | 35 |
| 2:argM:tmp | 0.76 | 0.77 | 0.76 | 161 |
| 2:root | 0.93 | 0.93 | 0.93 | 913 |
| 3:arg0:agt | 0.84 | 0.86 | 0.85 | 227 |
| 3:arg0:cau | 0.71 | 0.86 | 0.77 | 14 |
| 3:arg1:pat | 0.81 | 0.85 | 0.83 | 199 |
| 3:arg1:tem | 0.76 | 0.76 | 0.76 | 160 |
| 3:arg2:atr | 0.72 | 0.80 | 0.75 | 79 |
| 3:arg2:ben | 0.82 | 0.85 | 0.84 | 27 |
| 3:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 3:arg2:ext | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg2:loc | 0.47 | 0.38 | 0.42 | 21 |
| 3:arg3:ben | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg3:ein | 0.00 | 0.00 | 0.00 | 2 |
| 3:arg3:ori | 0.25 | 0.33 | 0.29 | 3 |
| 3:arg4:des | 0.46 | 0.86 | 0.60 | 7 |
| 3:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 3:argM:adv | 0.43 | 0.42 | 0.42 | 98 |
| 3:argM:atr | 0.00 | 0.00 | 0.00 | 7 |
| 3:argM:cau | 0.56 | 0.69 | 0.62 | 13 |
| 3:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 3:argM:fin | 0.64 | 0.60 | 0.62 | 15 |
| 3:argM:loc | 0.64 | 0.51 | 0.56 | 69 |
| 3:argM:mnr | 0.33 | 0.27 | 0.30 | 11 |
| 3:argM:tmp | 0.86 | 0.76 | 0.81 | 92 |
| 3:root | 0.90 | 0.91 | 0.90 | 569 |
| 4:arg0:agt | 0.77 | 0.92 | 0.84 | 119 |
| 4:arg0:cau | 0.67 | 0.67 | 0.67 | 6 |
| 4:arg1:pat | 0.72 | 0.83 | 0.77 | 87 |
| 4:arg1:tem | 0.82 | 0.77 | 0.80 | 109 |
| 4:arg2:atr | 0.74 | 0.75 | 0.75 | 53 |
| 4:arg2:ben | 0.60 | 0.55 | 0.57 | 11 |
| 4:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg2:loc | 0.83 | 0.45 | 0.59 | 11 |
| 4:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg4:des | 0.64 | 0.70 | 0.67 | 10 |
| 4:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:adv | 0.48 | 0.48 | 0.48 | 50 |
| 4:argM:atr | 0.00 | 0.00 | 0.00 | 4 |
| 4:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 4:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:fin | 0.60 | 0.55 | 0.57 | 11 |
| 4:argM:loc | 0.67 | 0.75 | 0.71 | 24 |
| 4:argM:mnr | 0.50 | 0.12 | 0.20 | 16 |
| 4:argM:tmp | 0.72 | 0.75 | 0.74 | 52 |
| 4:root | 0.85 | 0.89 | 0.87 | 322 |
| 5:arg0:agt | 0.68 | 0.72 | 0.70 | 72 |
| 5:arg0:cau | 1.00 | 0.20 | 0.33 | 5 |
| 5:arg1:pat | 0.66 | 0.72 | 0.69 | 71 |
| 5:arg1:tem | 0.77 | 0.73 | 0.75 | 41 |
| 5:arg2:atr | 0.55 | 0.52 | 0.54 | 21 |
| 5:arg2:ben | 0.40 | 0.67 | 0.50 | 6 |
| 5:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:argM:adv | 0.42 | 0.50 | 0.46 | 26 |
| 5:argM:cau | 1.00 | 0.33 | 0.50 | 3 |
| 5:argM:fin | 0.60 | 0.60 | 0.60 | 5 |
| 5:argM:loc | 0.56 | 0.43 | 0.49 | 21 |
| 5:argM:mnr | 0.00 | 0.00 | 0.00 | 7 |
| 5:argM:tmp | 0.69 | 0.60 | 0.64 | 30 |
| 5:root | 0.75 | 0.80 | 0.77 | 173 |
| 6:arg0:agt | 0.52 | 0.44 | 0.48 | 34 |
| 6:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:pat | 0.57 | 0.57 | 0.57 | 28 |
| 6:arg1:tem | 0.38 | 0.50 | 0.43 | 16 |
| 6:arg2:atr | 0.26 | 0.46 | 0.33 | 13 |
| 6:arg2:ben | 0.38 | 0.60 | 0.46 | 5 |
| 6:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg3:ben | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:adv | 0.22 | 0.40 | 0.29 | 10 |
| 6:argM:atr | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:fin | 0.25 | 0.50 | 0.33 | 2 |
| 6:argM:loc | 0.33 | 0.43 | 0.38 | 7 |
| 6:argM:mnr | 0.00 | 0.00 | 0.00 | 5 |
| 6:argM:tmp | 0.23 | 0.43 | 0.30 | 7 |
| 6:root | 0.60 | 0.59 | 0.59 | 82 |
| 7:arg0:agt | 0.26 | 0.41 | 0.32 | 17 |
| 7:arg1:pat | 0.42 | 0.65 | 0.51 | 17 |
| 7:arg1:tem | 0.30 | 0.20 | 0.24 | 15 |
| 7:arg2:atr | 0.25 | 0.13 | 0.17 | 15 |
| 7:arg2:ben | 0.00 | 0.00 | 0.00 | 7 |
| 7:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:adv | 0.03 | 0.20 | 0.06 | 5 |
| 7:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:loc | 0.00 | 0.00 | 0.00 | 3 |
| 7:argM:tmp | 0.00 | 0.00 | 0.00 | 6 |
| 7:root | 0.40 | 0.56 | 0.46 | 45 |
| 8:arg0:agt | 0.00 | 0.00 | 0.00 | 8 |
| 8:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg1:tem | 0.10 | 0.22 | 0.14 | 9 |
| 8:arg2:atr | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg2:ben | 0.00 | 0.00 | 0.00 | 0 |
| 8:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg2:loc | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:adv | 0.00 | 0.00 | 0.00 | 8 |
| 8:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:loc | 0.00 | 0.00 | 0.00 | 4 |
| 8:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.12 | 0.12 | 0.12 | 25 |
| 9:arg0:agt | 0.00 | 0.00 | 0.00 | 6 |
| 9:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 9:arg1:tem | 0.00 | 0.00 | 0.00 | 5 |
| 9:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 9:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:adv | 0.00 | 0.00 | 0.00 | 6 |
| 9:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:loc | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.07 | 0.12 | 0.09 | 17 |
| 10:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1:pat | 0.00 | 0.00 | 0.00 | 5 |
| 10:arg1:tem | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 10:arg2:ben | 0.00 | 0.00 | 0.00 | 2 |
| 10:argM:adv | 0.00 | 0.00 | 0.00 | 3 |
| 10:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 10:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 12 |
| 11:arg0:agt | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg1:pat | 0.00 | 0.00 | 0.00 | 2 |
| 11:arg1:tem | 0.00 | 0.00 | 0.00 | 4 |
| 11:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 11:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:adv | 0.00 | 0.00 | 0.00 | 4 |
| 11:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 11:root | 0.00 | 0.00 | 0.00 | 9 |
| 12:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 12:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 12:arg1:tem | 0.00 | 0.00 | 0.00 | 2 |
| 12:arg2:atr | 0.00 | 0.00 | 0.00 | 2 |
| 12:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:tmp | 0.00 | 0.00 | 0.00 | 3 |
| 12:root | 0.00 | 0.00 | 0.00 | 7 |
| 13:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg1:tem | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 13:root | 0.00 | 0.00 | 0.00 | 4 |
| 14:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 14:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 14:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 14:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.82 | 0.82 | 0.82 | 15436 |
| macro avg | 0.32 | 0.33 | 0.32 | 15436 |
| weighted avg | 0.82 | 0.82 | 0.82 | 15436 |
| tot root avg | 0.44 | 0.46 | 0.44 | 5165 |
| tot arg0:agt avg | 0.44 | 0.47 | 0.45 | 2257 |
| tot arg0:cau avg | 0.42 | 0.36 | 0.36 | 166 |
| tot arg0:exp avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg0:src avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg0 | 0.38 | 0.37 | 0.37 | 2426 |
| tot arg1:ext avg | 0.00 | 0.00 | 0.00 | 5 |
| tot arg1:loc avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg1:pat avg | 0.42 | 0.44 | 0.43 | 1770 |
| tot arg1:tem avg | 0.41 | 0.41 | 0.41 | 1635 |
| tot arg1 | 0.37 | 0.39 | 0.38 | 3411 |
| tot arg2:atr avg | 0.37 | 0.38 | 0.37 | 794 |
| tot arg2:ben avg | 0.35 | 0.39 | 0.36 | 248 |
| tot arg2:efi avg | 0.46 | 0.41 | 0.42 | 24 |
| tot arg2:exp avg | 0.50 | 0.33 | 0.40 | 6 |
| tot arg2:ext avg | 0.19 | 0.21 | 0.19 | 33 |
| tot arg2:ins avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg2:loc avg | 0.34 | 0.23 | 0.27 | 165 |
| tot arg2 | 0.33 | 0.32 | 0.32 | 1272 |
| tot arg3:ben avg | 0.10 | 0.04 | 0.06 | 15 |
| tot arg3:ein avg | 0.08 | 0.17 | 0.11 | 9 |
| tot arg3:fin avg | 0.75 | 0.75 | 0.75 | 4 |
| tot arg3:ori avg | 0.19 | 0.32 | 0.23 | 21 |
| tot arg3 | 0.19 | 0.25 | 0.20 | 49 |
| tot arg4:des avg | 0.38 | 0.56 | 0.45 | 61 |
| tot arg4:efi avg | 0.07 | 0.07 | 0.07 | 20 |
| tot arg4 | 0.24 | 0.33 | 0.27 | 81 |
| tot argM:adv avg | 0.23 | 0.27 | 0.24 | 876 |
| tot argM:atr avg | 0.26 | 0.25 | 0.25 | 73 |
| tot argM:cau avg | 0.37 | 0.34 | 0.33 | 115 |
| tot argM:ext avg | 0.00 | 0.00 | 0.00 | 19 |
| tot argM:fin avg | 0.39 | 0.40 | 0.39 | 158 |
| tot argM:ins avg | 0.00 | 0.00 | 0.00 | 1 |
| tot argM:loc avg | 0.36 | 0.35 | 0.36 | 591 |
| tot argM:mnr avg | 0.29 | 0.17 | 0.21 | 186 |
| tot argM:tmp avg | 0.38 | 0.39 | 0.38 | 1013 |
| tot argM | 0.30 | 0.29 | 0.29 | 3032 |
| tot r0 avg | 0.60 | 0.57 | 0.58 | 5242 |
| tot r1 avg | 0.56 | 0.58 | 0.56 | 3913 |
| tot r2 avg | 0.46 | 0.50 | 0.47 | 2711 |
| tot r3 avg | 0.44 | 0.47 | 0.45 | 1626 |
| tot r4 avg | 0.43 | 0.41 | 0.42 | 843 |
| tot r5 avg | 0.43 | 0.36 | 0.37 | 487 |
| tot r6 avg | 0.22 | 0.29 | 0.24 | 216 |
| tot r7 avg | 0.12 | 0.15 | 0.13 | 135 |
| tot r8 avg | 0.01 | 0.02 | 0.02 | 71 |
| tot r9 avg | 0.01 | 0.01 | 0.01 | 49 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 31 |
| tot r11 avg | 0.00 | 0.00 | 0.00 | 27 |
| tot r12 avg | 0.00 | 0.00 | 0.00 | 20 |
| tot r13 avg | 0.00 | 0.00 | 0.00 | 10 |
| tot r14 avg | 0.00 | 0.00 | 0.00 | 5 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/spa_enpt_mBERT
|
mbruton
| 2024-01-03T14:13:07Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"es",
"en",
"pt",
"dataset:mbruton/spanish_srl",
"dataset:CoNLL-2012",
"dataset:PropBank.Br",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-14T19:15:11Z |
---
license: apache-2.0
datasets:
- mbruton/spanish_srl
- CoNLL-2012
- PropBank.Br
language:
- es
- en
- pt
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for SpaBERT-enpt for Semantic Role Labeling (cased)
This model is fine-tuned on a version of [multilingual BERT](https://huggingface.co/bert-base-multilingual-cased) which is pre-trained on the SRL task for English and Portuguese, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL).
## Model Details
### Model Description
SpaBERT-enpt for Semantic Role Labeling (SRL) is a transformers model, leveraging mBERT's extensive pretraining on 104 languages to achieve better SRL predictions for Spanish. This model is additionally pre-trained on the SRL task for English and Portuguese. It was fine-tuned on Spanish with the following objectives:
- Identify up to 16 verbal roots within a sentence.
- Identify available arguments and thematic roles for each verbal root.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2/arg3/argM) and its thematic role (adv/agt/atr/ben/cau/cot/des/efi/ein/exp/ext/fin/ins/loc/mnr/ori/pat/src/tem/tmp). For example, 2:arg1:pat marks a token as the patient argument of the verbal root with index 2; a parsing sketch follows below.
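Downstream code typically needs to split these label strings back into their parts. A minimal parsing sketch is below; the helper name and return format are illustrative assumptions, not part of the released model.

```python
def parse_label(label: str) -> dict:
    """Split an 'r#:tag' label such as '2:arg1:pat' into its parts."""
    root_index, _, tag = label.partition(":")
    if tag == "root":
        # the token is the verbal root itself
        return {"root": int(root_index), "arg": "root", "role": None}
    arg, _, role = tag.partition(":")  # e.g. 'arg1' and 'pat'
    return {"root": int(root_index), "arg": arg, "role": role}

print(parse_label("2:arg1:pat"))  # {'root': 2, 'arg': 'arg1', 'role': 'pat'}
print(parse_label("0:root"))      # {'root': 0, 'arg': 'root', 'role': None}
```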
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Spanish (es), English (en), Portuguese (pt)
- **License:** Apache 2.0
- **Finetuned from model:** [English & Portuguese pre-trained multilingual BERT](https://huggingface.co/liaad/srl-enpt_mbert-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Spanish.
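As a hedged sketch (not an official usage guide), the checkpoint can be loaded through the `transformers` token-classification pipeline; the example sentence is invented for illustration.

```python
from transformers import pipeline

# Load this card's checkpoint as a token-classification pipeline.
srl = pipeline("token-classification", model="mbruton/spa_enpt_mBERT")

for pred in srl("El niño leyó el libro en la biblioteca."):
    # Each prediction carries the (sub)token, its r#:tag label, and a score.
    print(pred["word"], pred["entity"], round(pred["score"], 3))
```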
## Bias, Risks, and Limitations
The Spanish training set lacked highly complex sentences; as such, the model performs much better on sentences of mid- to low-complexity.
## Training Details
### Training Data
This model was pre-trained on the [OntoNotes 5.0 English SRL corpus](http://catalog.ldc.upenn.edu/LDC2013T19) and [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl).
This model was fine-tuned on the "train" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs (see the sketch below)
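A sketch of how these settings might map onto Hugging Face `TrainingArguments` follows; only the learning rate, batch size, and weight decay come from this card, while the output directory, evaluation schedule, and best-model metric are assumptions.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="spa-enpt-mbert-srl",   # hypothetical output path
    learning_rate=2e-5,                # from this card
    per_device_train_batch_size=16,    # from this card
    weight_decay=0.01,                 # from this card
    evaluation_strategy="epoch",       # assumption: evaluate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,       # needed for early-stopping callbacks
    metric_for_best_model="f1",        # assumption
)
```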
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type; a toy example is shown after the lists below.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
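Below is a toy, hedged example of computing these scores with the `evaluate` wrapper around seqeval; the B-/I-prefixed sequences are invented for illustration and may not match the exact tagging scheme used during evaluation.

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Toy gold and predicted sequences; real labels follow the r#:tag scheme above.
references  = [["B-arg0:agt", "I-arg0:agt", "O", "B-root"]]
predictions = [["B-arg0:agt", "I-arg0:agt", "O", "B-arg1:pat"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["overall_f1"])  # overall score
print(results["arg0:agt"])    # per-label precision/recall/f1/number
```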
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0:agt | 0.93 | 0.90 | 0.92 | 867 |
| 0:arg0:cau | 0.64 | 0.68 | 0.66 | 57 |
| 0:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg1:ext | 0.00 | 0.00 | 0.00 | 3 |
| 0:arg1:pat | 0.87 | 0.88 | 0.87 | 536 |
| 0:arg1:tem | 0.88 | 0.88 | 0.88 | 589 |
| 0:arg2:atr | 0.86 | 0.91 | 0.88 | 278 |
| 0:arg2:ben | 0.75 | 0.86 | 0.80 | 78 |
| 0:arg2:efi | 0.71 | 0.71 | 0.71 | 7 |
| 0:arg2:exp | 0.00 | 0.00 | 0.00 | 6 |
| 0:arg2:ext | 0.44 | 0.53 | 0.48 | 15 |
| 0:arg2:loc | 0.59 | 0.56 | 0.58 | 57 |
| 0:arg3:ben | 1.00 | 0.20 | 0.33 | 5 |
| 0:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg3:fin | 0.50 | 0.50 | 0.50 | 2 |
| 0:arg3:ori | 0.55 | 0.60 | 0.57 | 10 |
| 0:arg4:des | 0.52 | 0.81 | 0.63 | 16 |
| 0:arg4:efi | 0.25 | 0.20 | 0.22 | 5 |
| 0:argM:adv | 0.67 | 0.53 | 0.59 | 268 |
| 0:argM:atr | 0.57 | 0.50 | 0.53 | 24 |
| 0:argM:cau | 0.64 | 0.44 | 0.52 | 41 |
| 0:argM:ext | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:fin | 0.77 | 0.78 | 0.77 | 46 |
| 0:argM:loc | 0.72 | 0.77 | 0.75 | 186 |
| 0:argM:mnr | 0.62 | 0.62 | 0.62 | 66 |
| 0:argM:tmp | 0.84 | 0.86 | 0.85 | 411 |
| 0:root | 0.99 | 0.99 | 0.99 | 1662 |
| 1:arg0:agt | 0.93 | 0.90 | 0.91 | 564 |
| 1:arg0:cau | 0.81 | 0.77 | 0.79 | 44 |
| 1:arg1:ext | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg1:pat | 0.85 | 0.90 | 0.87 | 482 |
| 1:arg1:tem | 0.89 | 0.88 | 0.88 | 390 |
| 1:arg2:atr | 0.83 | 0.89 | 0.86 | 197 |
| 1:arg2:ben | 0.71 | 0.83 | 0.76 | 66 |
| 1:arg2:efi | 0.67 | 0.33 | 0.44 | 6 |
| 1:arg2:ext | 0.57 | 0.57 | 0.57 | 7 |
| 1:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 1:arg2:loc | 0.48 | 0.48 | 0.48 | 44 |
| 1:arg3:ben | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ein | 0.00 | 0.00 | 0.00 | 3 |
| 1:arg3:fin | 1.00 | 1.00 | 1.00 | 2 |
| 1:arg3:ori | 0.12 | 0.50 | 0.20 | 2 |
| 1:arg4:des | 0.50 | 0.90 | 0.64 | 10 |
| 1:arg4:efi | 0.00 | 0.00 | 0.00 | 2 |
| 1:argM:adv | 0.67 | 0.49 | 0.57 | 220 |
| 1:argM:atr | 0.65 | 0.58 | 0.61 | 19 |
| 1:argM:cau | 0.58 | 0.74 | 0.65 | 35 |
| 1:argM:ext | 0.33 | 0.14 | 0.20 | 7 |
| 1:argM:fin | 0.54 | 0.74 | 0.62 | 38 |
| 1:argM:loc | 0.66 | 0.77 | 0.71 | 156 |
| 1:argM:mnr | 0.60 | 0.48 | 0.53 | 44 |
| 1:argM:tmp | 0.78 | 0.83 | 0.80 | 247 |
| 1:root | 0.96 | 0.96 | 0.96 | 1323 |
| 2:arg0:agt | 0.86 | 0.88 | 0.87 | 336 |
| 2:arg0:cau | 0.78 | 0.71 | 0.75 | 35 |
| 2:arg0:exp | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg1:pat | 0.82 | 0.85 | 0.83 | 333 |
| 2:arg1:tem | 0.85 | 0.84 | 0.84 | 291 |
| 2:arg2:atr | 0.83 | 0.85 | 0.84 | 124 |
| 2:arg2:ben | 0.69 | 0.79 | 0.74 | 43 |
| 2:arg2:efi | 0.67 | 0.44 | 0.53 | 9 |
| 2:arg2:ext | 0.25 | 0.20 | 0.22 | 5 |
| 2:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg2:loc | 0.42 | 0.63 | 0.51 | 27 |
| 2:arg3:ben | 0.00 | 0.00 | 0.00 | 4 |
| 2:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg3:ori | 0.43 | 1.00 | 0.60 | 3 |
| 2:arg4:des | 0.60 | 0.75 | 0.67 | 16 |
| 2:arg4:efi | 0.00 | 0.00 | 0.00 | 6 |
| 2:argM:adv | 0.52 | 0.46 | 0.49 | 176 |
| 2:argM:atr | 0.58 | 0.47 | 0.52 | 15 |
| 2:argM:cau | 0.50 | 0.59 | 0.54 | 17 |
| 2:argM:ext | 0.00 | 0.00 | 0.00 | 4 |
| 2:argM:fin | 0.74 | 0.69 | 0.71 | 36 |
| 2:argM:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:argM:loc | 0.67 | 0.70 | 0.68 | 117 |
| 2:argM:mnr | 0.44 | 0.31 | 0.37 | 35 |
| 2:argM:tmp | 0.74 | 0.77 | 0.76 | 161 |
| 2:root | 0.93 | 0.93 | 0.93 | 913 |
| 3:arg0:agt | 0.86 | 0.81 | 0.84 | 227 |
| 3:arg0:cau | 0.69 | 0.64 | 0.67 | 14 |
| 3:arg1:pat | 0.81 | 0.83 | 0.82 | 199 |
| 3:arg1:tem | 0.71 | 0.81 | 0.76 | 160 |
| 3:arg2:atr | 0.73 | 0.81 | 0.77 | 79 |
| 3:arg2:ben | 0.75 | 0.78 | 0.76 | 27 |
| 3:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 3:arg2:ext | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg2:loc | 0.45 | 0.43 | 0.44 | 21 |
| 3:arg3:ben | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg3:ein | 0.00 | 0.00 | 0.00 | 2 |
| 3:arg3:ori | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg4:des | 0.40 | 0.86 | 0.55 | 7 |
| 3:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 3:argM:adv | 0.54 | 0.44 | 0.49 | 98 |
| 3:argM:atr | 0.00 | 0.00 | 0.00 | 7 |
| 3:argM:cau | 0.60 | 0.46 | 0.52 | 13 |
| 3:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 3:argM:fin | 0.42 | 0.67 | 0.51 | 15 |
| 3:argM:loc | 0.57 | 0.57 | 0.57 | 69 |
| 3:argM:mnr | 0.23 | 0.27 | 0.25 | 11 |
| 3:argM:tmp | 0.80 | 0.72 | 0.75 | 92 |
| 3:root | 0.90 | 0.90 | 0.90 | 569 |
| 4:arg0:agt | 0.77 | 0.82 | 0.80 | 119 |
| 4:arg0:cau | 0.60 | 0.50 | 0.55 | 6 |
| 4:arg1:pat | 0.70 | 0.80 | 0.75 | 87 |
| 4:arg1:tem | 0.79 | 0.64 | 0.71 | 109 |
| 4:arg2:atr | 0.70 | 0.79 | 0.74 | 53 |
| 4:arg2:ben | 0.64 | 0.64 | 0.64 | 11 |
| 4:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg2:loc | 0.86 | 0.55 | 0.67 | 11 |
| 4:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg4:des | 0.83 | 0.50 | 0.62 | 10 |
| 4:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:adv | 0.47 | 0.48 | 0.48 | 50 |
| 4:argM:atr | 0.00 | 0.00 | 0.00 | 4 |
| 4:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 4:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:fin | 0.36 | 0.36 | 0.36 | 11 |
| 4:argM:loc | 0.54 | 0.88 | 0.67 | 24 |
| 4:argM:mnr | 1.00 | 0.25 | 0.40 | 16 |
| 4:argM:tmp | 0.70 | 0.63 | 0.67 | 52 |
| 4:root | 0.83 | 0.84 | 0.83 | 322 |
| 5:arg0:agt | 0.71 | 0.78 | 0.74 | 72 |
| 5:arg0:cau | 1.00 | 0.20 | 0.33 | 5 |
| 5:arg1:pat | 0.63 | 0.79 | 0.70 | 71 |
| 5:arg1:tem | 0.69 | 0.49 | 0.57 | 41 |
| 5:arg2:atr | 0.38 | 0.48 | 0.43 | 21 |
| 5:arg2:ben | 0.33 | 0.67 | 0.44 | 6 |
| 5:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:des | 0.50 | 1.00 | 0.67 | 1 |
| 5:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:argM:adv | 0.39 | 0.46 | 0.42 | 26 |
| 5:argM:cau | 1.00 | 0.33 | 0.50 | 3 |
| 5:argM:fin | 0.33 | 0.40 | 0.36 | 5 |
| 5:argM:loc | 0.73 | 0.52 | 0.61 | 21 |
| 5:argM:mnr | 0.00 | 0.00 | 0.00 | 7 |
| 5:argM:tmp | 0.58 | 0.70 | 0.64 | 30 |
| 5:root | 0.74 | 0.75 | 0.74 | 173 |
| 6:arg0:agt | 0.62 | 0.53 | 0.57 | 34 |
| 6:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:pat | 0.47 | 0.50 | 0.48 | 28 |
| 6:arg1:tem | 0.56 | 0.56 | 0.56 | 16 |
| 6:arg2:atr | 0.17 | 0.23 | 0.19 | 13 |
| 6:arg2:ben | 0.00 | 0.00 | 0.00 | 5 |
| 6:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg3:ben | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:adv | 0.15 | 0.40 | 0.22 | 10 |
| 6:argM:atr | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:loc | 0.29 | 0.71 | 0.42 | 7 |
| 6:argM:mnr | 0.00 | 0.00 | 0.00 | 5 |
| 6:argM:tmp | 0.15 | 0.29 | 0.20 | 7 |
| 6:root | 0.68 | 0.62 | 0.65 | 82 |
| 7:arg0:agt | 0.26 | 0.53 | 0.35 | 17 |
| 7:arg1:pat | 0.25 | 0.29 | 0.27 | 17 |
| 7:arg1:tem | 0.36 | 0.53 | 0.43 | 15 |
| 7:arg2:atr | 0.17 | 0.13 | 0.15 | 15 |
| 7:arg2:ben | 0.00 | 0.00 | 0.00 | 7 |
| 7:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:adv | 0.00 | 0.00 | 0.00 | 5 |
| 7:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:loc | 0.00 | 0.00 | 0.00 | 3 |
| 7:argM:tmp | 0.00 | 0.00 | 0.00 | 6 |
| 7:root | 0.64 | 0.64 | 0.64 | 45 |
| 8:arg0:agt | 0.00 | 0.00 | 0.00 | 8 |
| 8:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg1:tem | 0.00 | 0.00 | 0.00 | 9 |
| 8:arg2:atr | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg2:loc | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:adv | 0.00 | 0.00 | 0.00 | 8 |
| 8:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:loc | 0.00 | 0.00 | 0.00 | 4 |
| 8:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.38 | 0.68 | 0.49 | 25 |
| 9:arg0:agt | 0.00 | 0.00 | 0.00 | 6 |
| 9:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 9:arg1:tem | 0.00 | 0.00 | 0.00 | 5 |
| 9:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 9:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:adv | 0.00 | 0.00 | 0.00 | 6 |
| 9:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:loc | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.00 | 0.00 | 0.00 | 17 |
| 10:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1:pat | 0.00 | 0.00 | 0.00 | 5 |
| 10:arg1:tem | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 10:arg2:ben | 0.00 | 0.00 | 0.00 | 2 |
| 10:argM:adv | 0.00 | 0.00 | 0.00 | 3 |
| 10:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 10:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 12 |
| 11:arg0:agt | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg1:pat | 0.00 | 0.00 | 0.00 | 2 |
| 11:arg1:tem | 0.00 | 0.00 | 0.00 | 4 |
| 11:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 11:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:adv | 0.00 | 0.00 | 0.00 | 4 |
| 11:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 11:root | 0.00 | 0.00 | 0.00 | 9 |
| 12:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 12:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 12:arg1:tem | 0.00 | 0.00 | 0.00 | 2 |
| 12:arg2:atr | 0.00 | 0.00 | 0.00 | 2 |
| 12:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:tmp | 0.00 | 0.00 | 0.00 | 3 |
| 12:root | 0.00 | 0.00 | 0.00 | 7 |
| 13:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg1:tem | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 13:root | 0.00 | 0.00 | 0.00 | 4 |
| 14:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 14:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 14:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 14:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.82 | 0.82 | 0.82 | 15436 |
| macro avg | 0.31 | 0.31 | 0.30 | 15436 |
| weighted avg | 0.81 | 0.82 | 0.81 | 15436 |
| tot root avg | 0.47 | 0.49 | 0.48 | 344 |
| tot arg0:agt avg | 0.46 | 0.47 | 0.46 | 2257 |
| tot arg0:cau avg | 0.41 | 0.32 | 0.34 | 166 |
| tot arg0:exp avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg0:src avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg0 | 0.39 | 0.36 | 0.36 | 2426 |
| tot arg1:ext avg | 0.00 | 0.00 | 0.00 | 5 |
| tot arg1:loc avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg1:pat avg | 0.39 | 0.42 | 0.40 | 1770 |
| tot arg1:tem avg | 0.41 | 0.40 | 0.40 | 1635 |
| tot arg1 | 0.36 | 0.37 | 0.36 | 3411 |
| tot arg2:atr avg | 0.33 | 0.36 | 0.35 | 794 |
| tot arg2:ben avg | 0.33 | 0.42 | 0.36 | 255 |
| tot arg2:efi avg | 0.41 | 0.30 | 0.34 | 24 |
| tot arg2:exp avg | 0.00 | 0.00 | 0.00 | 6 |
| tot arg2:ext avg | 0.18 | 0.19 | 0.18 | 33 |
| tot arg2:ins avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg2:loc avg | 0.31 | 0.29 | 0.30 | 165 |
| tot arg2 | 0.30 | 0.31 | 0.30 | 1279 |
| tot arg3:ben avg | 0.20 | 0.04 | 0.07 | 15 |
| tot arg3:ein avg | 0.00 | 0.00 | 0.00 | 9 |
| tot arg3:fin avg | 0.75 | 0.75 | 0.75 | 4 |
| tot arg3:ori avg | 0.16 | 0.30 | 0.20 | 21 |
| tot arg3 | 0.18 | 0.19 | 0.16 | 49 |
| tot arg4:des avg | 0.48 | 0.69 | 0.54 | 61 |
| tot arg4:efi avg | 0.04 | 0.03 | 0.04 | 20 |
| tot arg4 | 0.28 | 0.39 | 0.31 | 81 |
| tot argM:adv avg | 0.24 | 0.23 | 0.23 | 876 |
| tot argM:atr avg | 0.23 | 0.19 | 0.21 | 73 |
| tot argM:cau avg | 0.37 | 0.28 | 0.30 | 115 |
| tot argM:ext avg | 0.06 | 0.02 | 0.03 | 19 |
| tot argM:fin avg | 0.29 | 0.33 | 0.30 | 158 |
| tot argM:ins avg | 0.00 | 0.00 | 0.00 | 1 |
| tot argM:loc avg | 0.35 | 0.41 | 0.37 | 591 |
| tot argM:mnr avg | 0.32 | 0.21 | 0.24 | 186 |
| tot argM:tmp avg | 0.35 | 0.37 | 0.36 | 1013 |
| tot argM | 0.29 | 0.27 | 0.27 | 3032 |
| tot r0 avg | 0.57 | 0.54 | 0.54 | 5242 |
| tot r1 avg | 0.54 | 0.56 | 0.54 | 3913 |
| tot r2 avg | 0.46 | 0.48 | 0.46 | 2711 |
| tot r3 avg | 0.41 | 0.43 | 0.42 | 1626 |
| tot r4 avg | 0.47 | 0.41 | 0.42 | 892 |
| tot r5 avg | 0.42 | 0.40 | 0.38 | 487 |
| tot r6 avg | 0.18 | 0.23 | 0.19 | 216 |
| tot r7 avg | 0.12 | 0.15 | 0.13 | 135 |
| tot r8 avg | 0.03 | 0.05 | 0.03 | 71 |
| tot r9 avg | 0.00 | 0.00 | 0.00 | 49 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 31 |
| tot r11 avg | 0.00 | 0.00 | 0.00 | 27 |
| tot r12 avg | 0.00 | 0.00 | 0.00 | 20 |
| tot r13 avg | 0.00 | 0.00 | 0.00 | 10 |
| tot r14 avg | 0.00 | 0.00 | 0.00 | 5 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/spa_XLM-R
|
mbruton
| 2024-01-03T14:12:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"es",
"dataset:mbruton/spanish_srl",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-14T20:27:53Z |
---
license: apache-2.0
datasets:
- mbruton/spanish_srl
language:
- es
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for SpaXLM-R for Semantic Role Labeling
This model is fine-tuned from [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base) and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL).
## Model Details
### Model Description
SpaXLM-R for Semantic Role Labeling (SRL) is a transformers model, leveraging XLM-R's extensive pretraining on 100 languages to achieve better SRL predictions for Spanish. It was fine-tuned on Spanish with the following objectives:
- Identify up to 16 verbal roots within a sentence.
- Identify available arguments and thematic roles for each verbal root.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2/arg3/argM) and its thematic role (adv/agt/atr/ben/cau/cot/des/efi/ein/exp/ext/fin/ins/loc/mnr/ori/pat/src/tem/tmp)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Spanish (es)
- **License:** Apache 2.0
- **Finetuned from model:** [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Spanish.
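A minimal, hedged usage sketch with the `transformers` pipeline follows; the sentence is invented for illustration.

```python
from transformers import pipeline

srl = pipeline("token-classification", model="mbruton/spa_XLM-R")

# Print the predicted r#:tag label for each (sub)token.
for pred in srl("La empresa anunció nuevos resultados ayer."):
    print(pred["word"], pred["entity"])
```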
## Bias, Risks, and Limitations
The Spanish training set lacked highly complex sentences; as such, the model performs much better on sentences of mid- to low-complexity.
## Training Details
### Training Data
This model was fine-tuned on the "train" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs (see the sketch below)
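The early-stopping entry most plausibly corresponds to a patience of 10 evaluation rounds; this mapping onto `EarlyStoppingCallback` is an interpretation, not something stated in the card.

```python
from transformers import EarlyStoppingCallback

# Assumption: "Early Stopping: 10 epochs" means stop after 10 evaluations
# without improvement; requires load_best_model_at_end=True in the
# accompanying TrainingArguments.
early_stop = EarlyStoppingCallback(early_stopping_patience=10)
```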
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0:agt | 0.94 | 0.92 | 0.93 | 867 |
| 0:arg0:cau | 0.71 | 0.70 | 0.71 | 57 |
| 0:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg1:ext | 0.00 | 0.00 | 0.00 | 3 |
| 0:arg1:pat | 0.90 | 0.91 | 0.90 | 536 |
| 0:arg1:tem | 0.88 | 0.90 | 0.89 | 589 |
| 0:arg2:atr | 0.86 | 0.90 | 0.88 | 278 |
| 0:arg2:ben | 0.85 | 0.87 | 0.86 | 78 |
| 0:arg2:efi | 0.75 | 0.43 | 0.55 | 7 |
| 0:arg2:exp | 0.57 | 0.67 | 0.62 | 6 |
| 0:arg2:ext | 0.75 | 0.60 | 0.67 | 15 |
| 0:arg2:loc | 0.71 | 0.56 | 0.63 | 57 |
| 0:arg3:ben | 0.00 | 0.00 | 0.00 | 5 |
| 0:arg3:ein | 1.00 | 1.00 | 1.00 | 1 |
| 0:arg3:fin | 0.50 | 0.50 | 0.50 | 2 |
| 0:arg3:ori | 0.56 | 0.50 | 0.53 | 10 |
| 0:arg4:des | 0.53 | 1.00 | 0.70 | 16 |
| 0:arg4:efi | 0.50 | 0.40 | 0.44 | 5 |
| 0:argM:adv | 0.59 | 0.59 | 0.59 | 268 |
| 0:argM:atr | 0.62 | 0.62 | 0.62 | 24 |
| 0:argM:cau | 0.64 | 0.61 | 0.62 | 41 |
| 0:argM:ext | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:fin | 0.77 | 0.65 | 0.71 | 46 |
| 0:argM:loc | 0.74 | 0.77 | 0.76 | 186 |
| 0:argM:mnr | 0.73 | 0.45 | 0.56 | 66 |
| 0:argM:tmp | 0.85 | 0.88 | 0.86 | 411 |
| 0:root | 0.99 | 0.99 | 0.99 | 1662 |
| 1:arg0:agt | 0.93 | 0.92 | 0.92 | 564 |
| 1:arg0:cau | 0.77 | 0.82 | 0.79 | 44 |
| 1:arg1:ext | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg1:pat | 0.88 | 0.87 | 0.88 | 482 |
| 1:arg1:tem | 0.89 | 0.90 | 0.89 | 390 |
| 1:arg2:atr | 0.87 | 0.88 | 0.88 | 197 |
| 1:arg2:ben | 0.79 | 0.88 | 0.83 | 66 |
| 1:arg2:efi | 0.75 | 0.50 | 0.60 | 6 |
| 1:arg2:ext | 0.62 | 0.71 | 0.67 | 7 |
| 1:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 1:arg2:loc | 0.67 | 0.55 | 0.60 | 44 |
| 1:arg3:ben | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ein | 0.00 | 0.00 | 0.00 | 3 |
| 1:arg3:fin | 1.00 | 0.50 | 0.67 | 2 |
| 1:arg3:ori | 0.25 | 1.00 | 0.40 | 2 |
| 1:arg4:des | 0.50 | 0.90 | 0.64 | 10 |
| 1:arg4:efi | 0.00 | 0.00 | 0.00 | 2 |
| 1:argM:adv | 0.62 | 0.58 | 0.60 | 220 |
| 1:argM:atr | 0.64 | 0.84 | 0.73 | 19 |
| 1:argM:cau | 0.69 | 0.69 | 0.69 | 35 |
| 1:argM:ext | 0.00 | 0.00 | 0.00 | 7 |
| 1:argM:fin | 0.53 | 0.61 | 0.57 | 38 |
| 1:argM:loc | 0.75 | 0.74 | 0.75 | 156 |
| 1:argM:mnr | 0.65 | 0.25 | 0.36 | 44 |
| 1:argM:tmp | 0.82 | 0.81 | 0.81 | 247 |
| 1:root | 0.96 | 0.96 | 0.96 | 1323 |
| 2:arg0:agt | 0.82 | 0.92 | 0.87 | 336 |
| 2:arg0:cau | 0.84 | 0.77 | 0.81 | 35 |
| 2:arg0:exp | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg1:pat | 0.86 | 0.85 | 0.86 | 333 |
| 2:arg1:tem | 0.84 | 0.82 | 0.83 | 291 |
| 2:arg2:atr | 0.87 | 0.90 | 0.89 | 124 |
| 2:arg2:ben | 0.64 | 0.84 | 0.73 | 43 |
| 2:arg2:efi | 0.89 | 0.89 | 0.89 | 9 |
| 2:arg2:ext | 0.60 | 0.60 | 0.60 | 5 |
| 2:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg2:loc | 0.44 | 0.56 | 0.49 | 27 |
| 2:arg3:ben | 0.00 | 0.00 | 0.00 | 4 |
| 2:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg3:ori | 0.29 | 0.67 | 0.40 | 3 |
| 2:arg4:des | 0.61 | 0.88 | 0.72 | 16 |
| 2:arg4:efi | 0.25 | 0.17 | 0.20 | 6 |
| 2:argM:adv | 0.61 | 0.55 | 0.57 | 176 |
| 2:argM:atr | 0.83 | 0.33 | 0.48 | 15 |
| 2:argM:cau | 0.41 | 0.53 | 0.46 | 17 |
| 2:argM:ext | 0.00 | 0.00 | 0.00 | 4 |
| 2:argM:fin | 0.76 | 0.69 | 0.72 | 36 |
| 2:argM:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:argM:loc | 0.69 | 0.73 | 0.71 | 117 |
| 2:argM:mnr | 0.46 | 0.31 | 0.37 | 35 |
| 2:argM:tmp | 0.71 | 0.76 | 0.73 | 161 |
| 2:root | 0.92 | 0.94 | 0.93 | 913 |
| 3:arg0:agt | 0.82 | 0.84 | 0.83 | 227 |
| 3:arg0:cau | 0.61 | 0.79 | 0.69 | 14 |
| 3:arg1:pat | 0.77 | 0.88 | 0.82 | 199 |
| 3:arg1:tem | 0.78 | 0.78 | 0.78 | 160 |
| 3:arg2:atr | 0.76 | 0.78 | 0.77 | 79 |
| 3:arg2:ben | 0.83 | 0.93 | 0.88 | 27 |
| 3:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 3:arg2:ext | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg2:loc | 0.32 | 0.33 | 0.33 | 21 |
| 3:arg3:ben | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg3:ein | 0.00 | 0.00 | 0.00 | 2 |
| 3:arg3:ori | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg4:des | 0.32 | 1.00 | 0.48 | 7 |
| 3:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 3:argM:adv | 0.48 | 0.49 | 0.49 | 98 |
| 3:argM:atr | 1.00 | 0.29 | 0.44 | 7 |
| 3:argM:cau | 0.40 | 0.46 | 0.43 | 13 |
| 3:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 3:argM:fin | 0.32 | 0.40 | 0.35 | 15 |
| 3:argM:loc | 0.63 | 0.68 | 0.65 | 69 |
| 3:argM:mnr | 0.38 | 0.27 | 0.32 | 11 |
| 3:argM:tmp | 0.79 | 0.73 | 0.76 | 92 |
| 3:root | 0.89 | 0.91 | 0.90 | 569 |
| 4:arg0:agt | 0.76 | 0.79 | 0.77 | 119 |
| 4:arg0:cau | 0.67 | 0.67 | 0.67 | 6 |
| 4:arg1:pat | 0.63 | 0.72 | 0.67 | 87 |
| 4:arg1:tem | 0.81 | 0.72 | 0.76 | 109 |
| 4:arg2:atr | 0.83 | 0.83 | 0.83 | 53 |
| 4:arg2:ben | 0.55 | 0.55 | 0.55 | 11 |
| 4:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg2:loc | 0.50 | 0.36 | 0.42 | 11 |
| 4:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg4:des | 0.50 | 0.50 | 0.50 | 10 |
| 4:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:adv | 0.30 | 0.34 | 0.32 | 50 |
| 4:argM:atr | 0.00 | 0.00 | 0.00 | 4 |
| 4:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 4:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:fin | 0.20 | 0.18 | 0.19 | 11 |
| 4:argM:loc | 0.44 | 0.50 | 0.47 | 24 |
| 4:argM:mnr | 0.00 | 0.00 | 0.00 | 16 |
| 4:argM:tmp | 0.66 | 0.71 | 0.69 | 52 |
| 4:root | 0.82 | 0.84 | 0.83 | 322 |
| 5:arg0:agt | 0.69 | 0.69 | 0.69 | 72 |
| 5:arg0:cau | 1.00 | 0.40 | 0.57 | 5 |
| 5:arg1:pat | 0.68 | 0.68 | 0.68 | 71 |
| 5:arg1:tem | 0.69 | 0.54 | 0.60 | 41 |
| 5:arg2:atr | 0.63 | 0.57 | 0.60 | 21 |
| 5:arg2:ben | 0.25 | 0.50 | 0.33 | 6 |
| 5:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:argM:adv | 0.39 | 0.27 | 0.32 | 26 |
| 5:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 5:argM:fin | 0.00 | 0.00 | 0.00 | 5 |
| 5:argM:loc | 0.47 | 0.38 | 0.42 | 21 |
| 5:argM:mnr | 0.00 | 0.00 | 0.00 | 7 |
| 5:argM:tmp | 0.56 | 0.50 | 0.53 | 30 |
| 5:root | 0.73 | 0.73 | 0.73 | 173 |
| 6:arg0:agt | 0.43 | 0.38 | 0.41 | 34 |
| 6:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:pat | 0.46 | 0.46 | 0.46 | 28 |
| 6:arg1:tem | 0.33 | 0.38 | 0.35 | 16 |
| 6:arg2:atr | 0.29 | 0.62 | 0.39 | 13 |
| 6:arg2:ben | 0.20 | 0.20 | 0.20 | 5 |
| 6:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg3:ben | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:adv | 0.17 | 0.40 | 0.24 | 10 |
| 6:argM:atr | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:loc | 0.08 | 0.14 | 0.10 | 7 |
| 6:argM:mnr | 0.00 | 0.00 | 0.00 | 5 |
| 6:argM:tmp | 0.14 | 0.14 | 0.14 | 7 |
| 6:root | 0.61 | 0.56 | 0.59 | 82 |
| 7:arg0:agt | 0.15 | 0.18 | 0.16 | 17 |
| 7:arg1:pat | 0.30 | 0.35 | 0.32 | 17 |
| 7:arg1:tem | 0.64 | 0.47 | 0.54 | 15 |
| 7:arg2:atr | 0.33 | 0.07 | 0.11 | 15 |
| 7:arg2:ben | 0.00 | 0.00 | 0.00 | 7 |
| 7:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:adv | 0.00 | 0.00 | 0.00 | 5 |
| 7:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:loc | 0.00 | 0.00 | 0.00 | 3 |
| 7:argM:tmp | 0.00 | 0.00 | 0.00 | 6 |
| 7:root | 0.43 | 0.40 | 0.41 | 45 |
| 8:arg0:agt | 0.00 | 0.00 | 0.00 | 8 |
| 8:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg1:tem | 0.17 | 0.44 | 0.25 | 9 |
| 8:arg2:atr | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg2:loc | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:adv | 0.00 | 0.00 | 0.00 | 8 |
| 8:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:loc | 0.00 | 0.00 | 0.00 | 4 |
| 8:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.16 | 0.20 | 0.18 | 25 |
| 9:arg0:agt | 0.00 | 0.00 | 0.00 | 6 |
| 9:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 9:arg1:tem | 0.00 | 0.00 | 0.00 | 5 |
| 9:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 9:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:adv | 0.00 | 0.00 | 0.00 | 6 |
| 9:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:loc | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.04 | 0.06 | 0.05 | 17 |
| 10:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1:pat | 0.00 | 0.00 | 0.00 | 5 |
| 10:arg1:tem | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 10:arg2:ben | 0.00 | 0.00 | 0.00 | 2 |
| 10:argM:adv | 0.00 | 0.00 | 0.00 | 3 |
| 10:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 10:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 12 |
| 11:arg0:agt | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg1:pat | 0.00 | 0.00 | 0.00 | 2 |
| 11:arg1:tem | 0.00 | 0.00 | 0.00 | 4 |
| 11:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 11:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:adv | 0.00 | 0.00 | 0.00 | 4 |
| 11:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 11:root | 0.00 | 0.00 | 0.00 | 9 |
| 12:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 12:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 12:arg1:tem | 0.00 | 0.00 | 0.00 | 2 |
| 12:arg2:atr | 0.00 | 0.00 | 0.00 | 2 |
| 12:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:tmp | 0.00 | 0.00 | 0.00 | 3 |
| 12:root | 0.00 | 0.00 | 0.00 | 7 |
| 13:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg1:tem | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 13:root | 0.00 | 0.00 | 0.00 | 4 |
| 14:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 14:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 14:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 14:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.83 | 0.82 | 0.82 | 15436 |
| macro avg | 0.31 | 0.31 | 0.30 | 15436 |
| weighted avg | 0.82 | 0.82 | 0.82 | 15436 |
| tot root avg | 0.44 | 0.44 | 0.44 | 5165 |
| tot arg0:agt avg | 0.43 | 0.43 | 0.43 | 2257 |
| tot arg0:cau avg | 0.42 | 0.38 | 0.39 | 166 |
| tot arg0:exp avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg0:src avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg0 | 0.38 | 0.36 | 0.36 | 2426 |
| tot arg1:ext avg | 0.00 | 0.00 | 0.00 | 5 |
| tot arg1:loc avg | 0.00 | 0.00 | 0.00 | 1 |
| tot arg1:pat avg | 0.39 | 0.41 | 0.40 | 1770 |
| tot arg1:tem avg | 0.43 | 0.43 | 0.42 | 1635 |
| tot arg1 | 0.37 | 0.38 | 0.37 | 3411 |
| tot arg2:atr avg | 0.39 | 0.40 | 0.38 | 794 |
| tot arg2:ben avg | 0.34 | 0.44 | 0.37 | 255 |
| tot arg2:efi avg | 0.48 | 0.36 | 0.41 | 24 |
| tot arg2:exp avg | 0.57 | 0.67 | 0.62 | 6 |
| tot arg2:ext avg | 0.28 | 0.27 | 0.28 | 33 |
| tot arg2:ins avg | 0.00 | 0.00 | 0.00 | 2 |
| tot arg2:loc avg | 0.29 | 0.26 | 0.27 | 165 |
| tot arg2 | 0.34 | 0.35 | 0.34 | 1279 |
| tot arg3:ben avg | 0.00 | 0.00 | 0.00 | 15 |
| tot arg3:ein avg | 0.17 | 0.17 | 0.17 | 9 |
| tot arg3:fin avg | 0.75 | 0.50 | 0.59 | 4 |
| tot arg3:ori avg | 0.16 | 0.31 | 0.19 | 21 |
| tot arg3 | 0.18 | 0.21 | 0.18 | 49 |
| tot arg4:des avg | 0.35 | 0.61 | 0.43 | 61 |
| tot arg4:efi avg | 0.13 | 0.10 | 0.11 | 20 |
| tot arg4 | 0.25 | 0.37 | 0.28 | 81 |
| tot argM:adv avg | 0.23 | 0.23 | 0.22 | 876 |
| tot argM:atr avg | 0.39 | 0.26 | 0.28 | 73 |
| tot argM:cau avg | 0.24 | 0.25 | 0.24 | 115 |
| tot argM:ext avg | 0.00 | 0.00 | 0.00 | 19 |
| tot argM:fin avg | 0.23 | 0.23 | 0.23 | 158 |
| tot argM:ins avg | 0.00 | 0.00 | 0.00 | 1 |
| tot argM:loc avg | 0.32 | 0.33 | 0.32 | 591 |
| tot argM:mnr avg | 0.25 | 0.14 | 0.18 | 186 |
| tot argM:tmp avg | 0.35 | 0.35 | 0.35 | 1013 |
| tot argM | 0.26 | 0.24 | 0.24 | 3032 |
| tot r0 avg | 0.63 | 0.61 | 0.61 | 5242 |
| tot r1 avg | 0.56 | 0.57 | 0.55 | 3913 |
| tot r2 avg | 0.49 | 0.51 | 0.49 | 2711 |
| tot r3 avg | 0.44 | 0.46 | 0.43 | 1626 |
| tot r4 avg | 0.37 | 0.37 | 0.37 | 892 |
| tot r5 avg | 0.32 | 0.28 | 0.29 | 487 |
| tot r6 avg | 0.16 | 0.19 | 0.17 | 216 |
| tot r7 avg | 0.13 | 0.11 | 0.11 | 135 |
| tot r8 avg | 0.02 | 0.04 | 0.03 | 71 |
| tot r9 avg | 0.00 | 0.01 | 0.00 | 49 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 31 |
| tot r11 avg | 0.00 | 0.00 | 0.00 | 27 |
| tot r12 avg | 0.00 | 0.00 | 0.00 | 20 |
| tot r13 avg | 0.00 | 0.00 | 0.00 | 10 |
| tot r14 avg | 0.00 | 0.00 | 0.00 | 5 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
mbruton/spa_pt_XLM-R
|
mbruton
| 2024-01-03T14:09:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"es",
"pt",
"dataset:mbruton/spanish_srl",
"dataset:PropBank.Br",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-04-14T22:30:27Z |
---
license: apache-2.0
datasets:
- mbruton/spanish_srl
- PropBank.Br
language:
- es
- pt
metrics:
- seqeval
library_name: transformers
pipeline_tag: token-classification
---
# Model Card for SpaXLM-R-pt for Semantic Role Labeling
This model is fine-tuned on a version of [XLM RoBERTa Base](https://huggingface.co/xlm-roberta-base) which is pre-trained on the SRL task for Portuguese, and is one of 24 models introduced as part of [this project](https://github.com/mbruton0426/GalicianSRL).
## Model Details
### Model Description
SpaXLM-R-pt for Semantic Role Labeling (SRL) is a transformers model, leveraging XLM-R's extensive pretraining on 100 languages to achieve better SRL predictions for Spanish. This model is additionally pre-trained on the SRL task for Portuguese. It was fine-tuned on Spanish with the following objectives:
- Identify up to 16 verbal roots within a sentence.
- Identify available arguments and thematic roles for each verbal root.
Labels are formatted as: r#:tag, where r# links the token to a specific verbal root of index #, and tag identifies the token as the verbal root (root) or an individual argument (arg0/arg1/arg2/arg3/argM) and its thematic role (adv/agt/atr/ben/cau/cot/des/efi/ein/exp/ext/fin/ins/loc/mnr/ori/pat/src/tem/tmp)
- **Developed by:** [Micaella Bruton](mailto:micaellabruton@gmail.com)
- **Model type:** Transformers
- **Language(s) (NLP):** Spanish (es), Portuguese (pt)
- **License:** Apache 2.0
- **Finetuned from model:** [Portuguese pre-trained XLM RoBERTa Base](https://huggingface.co/liaad/srl-pt_xlmr-base)
### Model Sources
- **Repository:** [GalicianSRL](https://github.com/mbruton0426/GalicianSRL)
- **Paper:** To be updated
## Uses
This model is intended to be used to develop and improve natural language processing tools for Spanish.
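As with the sibling cards, a hedged pipeline sketch is given below; the sentence is invented for illustration.

```python
from transformers import pipeline

srl = pipeline("token-classification", model="mbruton/spa_pt_XLM-R")

for pred in srl("Los estudiantes entregaron el proyecto a tiempo."):
    # "entity" holds the r#:tag label described in the model description
    print(pred["word"], pred["entity"])
```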
## Bias, Risks, and Limitations
The Spanish training set lacked highly complex sentences; as such, the model performs much better on sentences of mid- to low-complexity.
## Training Details
### Training Data
This model was pre-trained on the [PropBank.Br Portuguese SRL corpus](http://www.nilc.icmc.usp.br/portlex/index.php/en/projects/propbankbringl).
This model was fine-tuned on the "train" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Training Hyperparameters
- **Learning Rate:** 2e-5
- **Batch Size:** 16
- **Weight Decay:** 0.01
- **Early Stopping:** 10 epochs
## Evaluation
#### Testing Data
This model was tested on the "test" portion of the [SpanishSRL Dataset](https://huggingface.co/datasets/mbruton/spanish_srl) produced as part of this same project.
#### Metrics
[seqeval](https://huggingface.co/spaces/evaluate-metric/seqeval) is a Python framework for sequence labeling evaluation. It can evaluate the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, and semantic role labeling.
It supplies scoring both overall and per label type.
Overall:
- `accuracy`: the average [accuracy](https://huggingface.co/metrics/accuracy), on a scale between 0.0 and 1.0.
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), which is the harmonic mean of the precision and recall. It also has a scale of 0.0 to 1.0.
Per label type:
- `precision`: the average [precision](https://huggingface.co/metrics/precision), on a scale between 0.0 and 1.0.
- `recall`: the average [recall](https://huggingface.co/metrics/recall), on a scale between 0.0 and 1.0.
- `f1`: the average [F1 score](https://huggingface.co/metrics/f1), on a scale between 0.0 and 1.0.
### Results
| Label | Precision | Recall | f1-score | Support |
| :----------: | :-------: | :----: | :------: | :-----: |
| 0:arg0:agt | 0.93 | 0.93 | 0.93 | 867 |
| 0:arg0:cau | 0.69 | 0.60 | 0.64 | 57 |
| 0:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 0:arg1:ext | 0.00 | 0.00 | 0.00 | 3 |
| 0:arg1:pat | 0.88 | 0.89 | 0.88 | 536 |
| 0:arg1:tem | 0.88 | 0.89 | 0.89 | 589 |
| 0:arg2:atr | 0.85 | 0.89 | 0.87 | 278 |
| 0:arg2:ben | 0.81 | 0.90 | 0.85 | 78 |
| 0:arg2:efi | 0.67 | 0.29 | 0.40 | 7 |
| 0:arg2:exp | 0.33 | 0.17 | 0.22 | 6 |
| 0:arg2:ext | 0.57 | 0.53 | 0.55 | 15 |
| 0:arg2:loc | 0.73 | 0.33 | 0.46 | 57 |
| 0:arg3:ben | 1.00 | 0.20 | 0.33 | 5 |
| 0:arg3:ein | 0.50 | 1.00 | 0.67 | 1 |
| 0:arg3:fin | 0.50 | 0.50 | 0.50 | 2 |
| 0:arg3:ori | 0.67 | 0.60 | 0.63 | 10 |
| 0:arg4:des | 0.58 | 0.94 | 0.71 | 16 |
| 0:arg4:efi | 0.67 | 0.40 | 0.50 | 5 |
| 0:argM:adv | 0.58 | 0.60 | 0.59 | 268 |
| 0:argM:atr | 0.65 | 0.62 | 0.64 | 24 |
| 0:argM:cau | 0.79 | 0.56 | 0.66 | 41 |
| 0:argM:ext | 0.00 | 0.00 | 0.00 | 5 |
| 0:argM:fin | 0.80 | 0.78 | 0.79 | 46 |
| 0:argM:loc | 0.69 | 0.80 | 0.74 | 186 |
| 0:argM:mnr | 0.72 | 0.47 | 0.57 | 66 |
| 0:argM:tmp | 0.86 | 0.86 | 0.86 | 411 |
| 0:root | 0.99 | 0.99 | 0.99 | 1662 |
| 1:arg0:agt | 0.92 | 0.91 | 0.92 | 564 |
| 1:arg0:cau | 0.73 | 0.82 | 0.77 | 44 |
| 1:arg1:ext | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg1:pat | 0.89 | 0.87 | 0.88 | 482 |
| 1:arg1:tem | 0.88 | 0.90 | 0.89 | 390 |
| 1:arg2:atr | 0.89 | 0.88 | 0.88 | 197 |
| 1:arg2:ben | 0.75 | 0.89 | 0.81 | 66 |
| 1:arg2:efi | 1.00 | 0.50 | 0.67 | 6 |
| 1:arg2:ext | 0.71 | 0.71 | 0.71 | 7 |
| 1:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 1:arg2:loc | 0.62 | 0.52 | 0.57 | 44 |
| 1:arg3:ben | 0.00 | 0.00 | 0.00 | 2 |
| 1:arg3:ein | 0.00 | 0.00 | 0.00 | 3 |
| 1:arg3:fin | 1.00 | 1.00 | 1.00 | 2 |
| 1:arg3:ori | 0.12 | 0.50 | 0.20 | 2 |
| 1:arg4:des | 0.47 | 0.90 | 0.62 | 10 |
| 1:arg4:efi | 0.50 | 0.50 | 0.50 | 2 |
| 1:argM:adv | 0.56 | 0.58 | 0.57 | 220 |
| 1:argM:atr | 0.67 | 0.74 | 0.70 | 19 |
| 1:argM:cau | 0.65 | 0.74 | 0.69 | 35 |
| 1:argM:ext | 0.00 | 0.00 | 0.00 | 7 |
| 1:argM:fin | 0.57 | 0.66 | 0.61 | 38 |
| 1:argM:loc | 0.74 | 0.74 | 0.74 | 156 |
| 1:argM:mnr | 0.60 | 0.27 | 0.37 | 44 |
| 1:argM:tmp | 0.83 | 0.81 | 0.82 | 247 |
| 1:root | 0.97 | 0.97 | 0.97 | 1323 |
| 2:arg0:agt | 0.86 | 0.90 | 0.88 | 336 |
| 2:arg0:cau | 0.79 | 0.77 | 0.78 | 35 |
| 2:arg0:exp | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg0:src | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg1:pat | 0.84 | 0.82 | 0.83 | 333 |
| 2:arg1:tem | 0.84 | 0.84 | 0.84 | 291 |
| 2:arg2:atr | 0.92 | 0.89 | 0.90 | 124 |
| 2:arg2:ben | 0.69 | 0.84 | 0.76 | 43 |
| 2:arg2:efi | 0.89 | 0.89 | 0.89 | 9 |
| 2:arg2:ext | 0.33 | 0.60 | 0.43 | 5 |
| 2:arg2:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg2:loc | 0.43 | 0.44 | 0.44 | 27 |
| 2:arg3:ben | 0.00 | 0.00 | 0.00 | 4 |
| 2:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 2:arg3:ori | 0.40 | 0.67 | 0.50 | 3 |
| 2:arg4:des | 0.50 | 0.88 | 0.64 | 16 |
| 2:arg4:efi | 0.00 | 0.00 | 0.00 | 6 |
| 2:argM:adv | 0.54 | 0.51 | 0.52 | 176 |
| 2:argM:atr | 0.56 | 0.33 | 0.42 | 15 |
| 2:argM:cau | 0.43 | 0.59 | 0.50 | 17 |
| 2:argM:ext | 0.00 | 0.00 | 0.00 | 4 |
| 2:argM:fin | 0.78 | 0.69 | 0.74 | 36 |
| 2:argM:ins | 0.00 | 0.00 | 0.00 | 1 |
| 2:argM:loc | 0.73 | 0.74 | 0.73 | 117 |
| 2:argM:mnr | 0.38 | 0.29 | 0.33 | 35 |
| 2:argM:tmp | 0.78 | 0.76 | 0.77 | 161 |
| 2:root | 0.93 | 0.94 | 0.94 | 913 |
| 3:arg0:agt | 0.86 | 0.87 | 0.86 | 227 |
| 3:arg0:cau | 0.71 | 0.71 | 0.71 | 14 |
| 3:arg1:pat | 0.81 | 0.83 | 0.82 | 199 |
| 3:arg1:tem | 0.78 | 0.81 | 0.79 | 160 |
| 3:arg2:atr | 0.78 | 0.77 | 0.78 | 79 |
| 3:arg2:ben | 0.69 | 0.93 | 0.79 | 27 |
| 3:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 3:arg2:ext | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg2:loc | 0.50 | 0.38 | 0.43 | 21 |
| 3:arg3:ben | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg3:ein | 0.00 | 0.00 | 0.00 | 2 |
| 3:arg3:ori | 0.00 | 0.00 | 0.00 | 3 |
| 3:arg4:des | 0.47 | 1.00 | 0.64 | 7 |
| 3:arg4:efi | 0.00 | 0.00 | 0.00 | 5 |
| 3:argM:adv | 0.51 | 0.47 | 0.49 | 98 |
| 3:argM:atr | 1.00 | 0.14 | 0.25 | 7 |
| 3:argM:cau | 0.50 | 0.31 | 0.38 | 13 |
| 3:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 3:argM:fin | 0.56 | 0.67 | 0.61 | 15 |
| 3:argM:loc | 0.64 | 0.68 | 0.66 | 69 |
| 3:argM:mnr | 0.43 | 0.55 | 0.48 | 11 |
| 3:argM:tmp | 0.86 | 0.82 | 0.84 | 92 |
| 3:root | 0.92 | 0.93 | 0.92 | 569 |
| 4:arg0:agt | 0.86 | 0.81 | 0.83 | 119 |
| 4:arg0:cau | 1.00 | 0.67 | 0.80 | 6 |
| 4:arg1:pat | 0.71 | 0.75 | 0.73 | 87 |
| 4:arg1:tem | 0.85 | 0.75 | 0.80 | 109 |
| 4:arg2:atr | 0.75 | 0.92 | 0.83 | 53 |
| 4:arg2:ben | 0.53 | 0.82 | 0.64 | 11 |
| 4:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg2:loc | 0.58 | 0.64 | 0.61 | 11 |
| 4:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 4:arg4:des | 0.69 | 0.90 | 0.78 | 10 |
| 4:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:adv | 0.56 | 0.60 | 0.58 | 50 |
| 4:argM:atr | 0.00 | 0.00 | 0.00 | 4 |
| 4:argM:cau | 0.14 | 0.33 | 0.20 | 3 |
| 4:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 4:argM:fin | 0.64 | 0.64 | 0.64 | 11 |
| 4:argM:loc | 0.58 | 0.75 | 0.65 | 24 |
| 4:argM:mnr | 0.50 | 0.31 | 0.38 | 16 |
| 4:argM:tmp | 0.75 | 0.69 | 0.72 | 52 |
| 4:root | 0.90 | 0.91 | 0.90 | 322 |
| 5:arg0:agt | 0.79 | 0.88 | 0.83 | 72 |
| 5:arg0:cau | 1.00 | 0.40 | 0.57 | 5 |
| 5:arg1:pat | 0.64 | 0.65 | 0.64 | 71 |
| 5:arg1:tem | 0.81 | 0.61 | 0.69 | 41 |
| 5:arg2:atr | 0.62 | 0.48 | 0.54 | 21 |
| 5:arg2:ben | 0.43 | 1.00 | 0.60 | 6 |
| 5:arg2:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg3:ein | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 5:arg4:efi | 0.00 | 0.00 | 0.00 | 1 |
| 5:argM:adv | 0.33 | 0.35 | 0.34 | 26 |
| 5:argM:cau | 0.00 | 0.00 | 0.00 | 3 |
| 5:argM:fin | 0.50 | 0.80 | 0.62 | 5 |
| 5:argM:loc | 0.58 | 0.67 | 0.62 | 21 |
| 5:argM:mnr | 0.00 | 0.00 | 0.00 | 7 |
| 5:argM:tmp | 0.62 | 0.67 | 0.65 | 30 |
| 5:root | 0.82 | 0.84 | 0.83 | 173 |
| 6:arg0:agt | 0.69 | 0.53 | 0.60 | 34 |
| 6:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg1:pat | 0.43 | 0.82 | 0.57 | 28 |
| 6:arg1:tem | 0.39 | 0.44 | 0.41 | 16 |
| 6:arg2:atr | 0.31 | 0.38 | 0.34 | 13 |
| 6:arg2:ben | 0.50 | 0.60 | 0.55 | 5 |
| 6:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 6:arg3:ben | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:adv | 0.23 | 0.70 | 0.34 | 10 |
| 6:argM:atr | 0.00 | 0.00 | 0.00 | 2 |
| 6:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 6:argM:fin | 0.33 | 0.50 | 0.40 | 2 |
| 6:argM:loc | 0.18 | 0.57 | 0.28 | 7 |
| 6:argM:mnr | 0.00 | 0.00 | 0.00 | 5 |
| 6:argM:tmp | 0.50 | 0.86 | 0.63 | 7 |
| 6:root | 0.65 | 0.59 | 0.62 | 82 |
| 7:arg0:agt | 0.35 | 0.88 | 0.50 | 17 |
| 7:arg1:pat | 0.54 | 0.82 | 0.65 | 17 |
| 7:arg1:tem | 0.59 | 0.67 | 0.62 | 15 |
| 7:arg2:atr | 0.53 | 0.53 | 0.53 | 15 |
| 7:arg2:ben | 0.40 | 0.29 | 0.33 | 7 |
| 7:arg2:loc | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 7:arg4:des | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:adv | 0.14 | 0.20 | 0.17 | 5 |
| 7:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 7:argM:loc | 0.00 | 0.00 | 0.00 | 3 |
| 7:argM:tmp | 0.42 | 0.83 | 0.56 | 6 |
| 7:root | 0.54 | 0.84 | 0.66 | 45 |
| 8:arg0:agt | 0.00 | 0.00 | 0.00 | 8 |
| 8:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 8:arg1:tem | 0.21 | 0.56 | 0.30 | 9 |
| 8:arg2:atr | 0.08 | 0.25 | 0.12 | 4 |
| 8:arg2:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:arg2:loc | 0.00 | 0.00 | 0.00 | 2 |
| 8:arg3:ori | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:adv | 0.27 | 0.38 | 0.32 | 8 |
| 8:argM:ext | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:loc | 0.00 | 0.00 | 0.00 | 4 |
| 8:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 8:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 8:root | 0.38 | 0.68 | 0.49 | 25 |
| 9:arg0:agt | 0.00 | 0.00 | 0.00 | 6 |
| 9:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:arg1:pat | 0.00 | 0.00 | 0.00 | 4 |
| 9:arg1:tem | 0.00 | 0.00 | 0.00 | 5 |
| 9:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 9:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:adv | 0.00 | 0.00 | 0.00 | 6 |
| 9:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 9:argM:fin | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:loc | 0.00 | 0.00 | 0.00 | 2 |
| 9:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 9:root | 0.04 | 0.06 | 0.05 | 17 |
| 10:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg1:pat | 0.00 | 0.00 | 0.00 | 5 |
| 10:arg1:tem | 0.00 | 0.00 | 0.00 | 3 |
| 10:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 10:arg2:ben | 0.00 | 0.00 | 0.00 | 2 |
| 10:argM:adv | 0.00 | 0.00 | 0.00 | 3 |
| 10:argM:fin | 0.00 | 0.00 | 0.00 | 1 |
| 10:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 10:root | 0.00 | 0.00 | 0.00 | 12 |
| 11:arg0:agt | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 11:arg1:pat | 0.00 | 0.00 | 0.00 | 2 |
| 11:arg1:tem | 0.00 | 0.00 | 0.00 | 4 |
| 11:arg2:atr | 0.00 | 0.00 | 0.00 | 3 |
| 11:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:adv | 0.00 | 0.00 | 0.00 | 4 |
| 11:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 11:argM:tmp | 0.00 | 0.00 | 0.00 | 1 |
| 11:root | 0.00 | 0.00 | 0.00 | 9 |
| 12:arg0:agt | 0.00 | 0.00 | 0.00 | 3 |
| 12:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 12:arg1:tem | 0.00 | 0.00 | 0.00 | 2 |
| 12:arg2:atr | 0.00 | 0.00 | 0.00 | 2 |
| 12:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:cau | 0.00 | 0.00 | 0.00 | 1 |
| 12:argM:tmp | 0.00 | 0.00 | 0.00 | 3 |
| 12:root | 0.00 | 0.00 | 0.00 | 7 |
| 13:arg0:cau | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg1:tem | 0.00 | 0.00 | 0.00 | 1 |
| 13:arg2:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:adv | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:atr | 0.00 | 0.00 | 0.00 | 1 |
| 13:argM:loc | 0.00 | 0.00 | 0.00 | 1 |
| 13:root | 0.00 | 0.00 | 0.00 | 4 |
| 14:arg1:pat | 0.00 | 0.00 | 0.00 | 1 |
| 14:arg2:ben | 0.00 | 0.00 | 0.00 | 1 |
| 14:argM:mnr | 0.00 | 0.00 | 0.00 | 1 |
| 14:root | 0.00 | 0.00 | 0.00 | 2 |
| micro avg | 0.83 | 0.83 | 0.83 | 15436 |
| macro avg | 0.34 | 0.36 | 0.34 | 15436 |
| weighted avg | 0.83 | 0.83 | 0.83 | 15436 |
| tot root avg | 0.48 | 0.52 | 0.49 | 5165.00 |
| tot arg0:agt avg | 0.48 | 0.52 | 0.49 | 2257.00 |
| tot arg0:cau avg | 0.45 | 0.36 | 0.39 | 166.00 |
| tot arg0:exp avg | 0.00 | 0.00 | 0.00 | 1.00 |
| tot arg0:src avg | 0.00 | 0.00 | 0.00 | 2.00 |
| tot arg0 | 0.41 | 0.40 | 0.39 | 2426.00 |
| tot arg1:ext avg | 0.00 | 0.00 | 0.00 | 5.00 |
| tot arg1:loc avg | 0.00 | 0.00 | 0.00 | 1.00 |
| tot arg1:pat avg | 0.41 | 0.46 | 0.43 | 1770.00 |
| tot arg1:tem avg | 0.45 | 0.46 | 0.45 | 1635.00 |
| tot arg1 | 0.39 | 0.42 | 0.39 | 3411.00 |
| tot arg2:atr avg | 0.41 | 0.43 | 0.41 | 794.00 |
| tot arg2:ben avg | 0.41 | 0.56 | 0.46 | 255.00 |
| tot arg2:efi avg | 0.51 | 0.34 | 0.39 | 24.00 |
| tot arg2:exp avg | 0.33 | 0.17 | 0.22 | 6.00 |
| tot arg2:ext avg | 0.23 | 0.26 | 0.24 | 33.00 |
| tot arg2:ins avg | 0.00 | 0.00 | 0.00 | 2.00 |
| tot arg2:loc avg | 0.32 | 0.26 | 0.28 | 165.00 |
| tot arg2 | 0.36 | 0.38 | 0.36 | 1279.00 |
| tot arg3:ben avg | 0.20 | 0.04 | 0.07 | 15.00 |
| tot arg3:ein avg | 0.08 | 0.17 | 0.11 | 9.00 |
| tot arg3:fin avg | 0.75 | 0.75 | 0.75 | 4.00 |
| tot arg3:ori avg | 0.17 | 0.25 | 0.19 | 21.00 |
| tot arg3 | 0.21 | 0.22 | 0.19 | 49.00 |
| tot arg4:des avg | 0.39 | 0.66 | 0.48 | 61.00 |
| tot arg4:efi avg | 0.20 | 0.15 | 0.17 | 20.00 |
| tot arg4 | 0.30 | 0.42 | 0.34 | 81.00 |
| tot argM:adv avg | 0.27 | 0.31 | 0.28 | 876.00 |
| tot argM:atr avg | 0.36 | 0.23 | 0.25 | 73.00 |
| tot argM:cau avg | 0.28 | 0.28 | 0.27 | 115.00 |
| tot argM:ext avg | 0.00 | 0.00 | 0.00 | 19.00 |
| tot argM:fin avg | 0.38 | 0.43 | 0.40 | 158.00 |
| tot argM:ins avg | 0.00 | 0.00 | 0.00 | 1.00 |
| tot argM:loc avg | 0.35 | 0.41 | 0.37 | 591.00 |
| tot argM:mnr avg | 0.29 | 0.21 | 0.24 | 186.00 |
| tot argM:tmp avg | 0.43 | 0.48 | 0.45 | 1013.00 |
| tot argM | 0.31 | 0.32 | 0.30 | 3032.00 |
| tot r0 avg | 0.64 | 0.58 | 0.59 | 5242 |
| tot r1 avg | 0.58 | 0.59 | 0.57 | 3913 |
| tot r2 avg | 0.47 | 0.50 | 0.48 | 2711 |
| tot r3 avg | 0.48 | 0.47 | 0.45 | 1626 |
| tot r4 avg | 0.48 | 0.50 | 0.48 | 892 |
| tot r5 avg | 0.38 | 0.39 | 0.36 | 487 |
| tot r6 avg | 0.25 | 0.35 | 0.28 | 216 |
| tot r7 avg | 0.25 | 0.36 | 0.29 | 135 |
| tot r8 avg | 0.06 | 0.12 | 0.08 | 71 |
| tot r9 avg | 0.00 | 0.01 | 0.00 | 49 |
| tot r10 avg | 0.00 | 0.00 | 0.00 | 31 |
| tot r11 avg | 0.00 | 0.00 | 0.00 | 27 |
| tot r12 avg | 0.00 | 0.00 | 0.00 | 20 |
| tot r13 avg | 0.00 | 0.00 | 0.00 | 10 |
| tot r14 avg | 0.00 | 0.00 | 0.00 | 5 |
## Citation
**BibTeX:**
```
@inproceedings{bruton-beloucif-2023-bertie,
title = "{BERT}ie Bott{'}s Every Flavor Labels: A Tasty Introduction to Semantic Role Labeling for {G}alician",
author = "Bruton, Micaella and
Beloucif, Meriem",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.671",
doi = "10.18653/v1/2023.emnlp-main.671",
pages = "10892--10902",
abstract = "In this paper, we leverage existing corpora, WordNet, and dependency parsing to build the first Galician dataset for training semantic role labeling systems in an effort to expand available NLP resources. Additionally, we introduce verb indexing, a new pre-processing method, which helps increase the performance when semantically parsing highly-complex sentences. We use transfer-learning to test both the resource and the verb indexing method. Our results show that the effects of verb indexing were amplified in scenarios where the model was both pre-trained and fine-tuned on datasets utilizing the method, but improvements are also noticeable when only used during fine-tuning. The best-performing Galician SRL model achieved an f1 score of 0.74, introducing a baseline for future Galician SRL systems. We also tested our method on Spanish where we achieved an f1 score of 0.83, outperforming the baseline set by the 2009 CoNLL Shared Task by 0.025 showing the merits of our verb indexing method for pre-processing.",
}
```
|
ntc-ai/SDXL-LoRA-slider.luminescent
|
ntc-ai
| 2024-01-03T14:03:03Z | 22 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-03T14:03:00Z |
---
language:
- en
thumbnail: "images/evaluate/luminescent.../luminescent_17_3.0.png"
widget:
- text: luminescent
output:
url: images/luminescent_17_3.0.png
- text: luminescent
output:
url: images/luminescent_19_3.0.png
- text: luminescent
output:
url: images/luminescent_20_3.0.png
- text: luminescent
output:
url: images/luminescent_21_3.0.png
- text: luminescent
output:
url: images/luminescent_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "luminescent"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - luminescent (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/luminescent_17_-3.0.png" width=256 height=256 /> | <img src="images/luminescent_17_0.0.png" width=256 height=256 /> | <img src="images/luminescent_17_3.0.png" width=256 height=256 /> |
| <img src="images/luminescent_19_-3.0.png" width=256 height=256 /> | <img src="images/luminescent_19_0.0.png" width=256 height=256 /> | <img src="images/luminescent_19_3.0.png" width=256 height=256 /> |
| <img src="images/luminescent_20_-3.0.png" width=256 height=256 /> | <img src="images/luminescent_20_0.0.png" width=256 height=256 /> | <img src="images/luminescent_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
luminescent
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.luminescent', weight_name='luminescent.safetensors', adapter_name="luminescent")
# Activate the LoRA
pipe.set_adapters(["luminescent"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, luminescent"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 840+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
EvanD/xlm-roberta-base-hungarian-ner-huner
|
EvanD
| 2024-01-03T14:02:44Z | 15 | 3 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"named-entity-recognition",
"sequence-tagger-model",
"hu",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T13:41:46Z |
---
pipeline_tag: token-classification
tags:
- named-entity-recognition
- sequence-tagger-model
widget:
- text: A nevem Amadeus Wolfgang és Berlinben élek
inference:
parameters:
aggregation_strategy: simple
grouped_entities: true
language:
- hu
---
XLM-RoBERTa model trained on the [Hungarian NER](https://flairnlp.github.io/docs/tutorial-training/how-to-load-prepared-dataset) dataset from Flair
| Test metric | Results |
|-------------------------|--------------------------|
| test_f1_mac_hu_ner | 0.9962009787559509 |
| test_loss_hu_ner | 0.019755737856030464 |
| test_prec_mac_hu_ner | 0.9692726135253906 |
| test_rec_mac_hu_ner | 0.9708725810050964 |
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("EvanD/xlm-roberta-base-hungarian-ner-huner")
ner_model = AutoModelForTokenClassification.from_pretrained("EvanD/xlm-roberta-base-hungarian-ner-huner")
nlp = pipeline("ner", model=ner_model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "A nevem Amadeus Wolfgang รฉs Berlinben รฉlek"
ner_results = nlp(example)
print(ner_results)
```
|
cbertrand/checkpoint
|
cbertrand
| 2024-01-03T14:02:21Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-02T10:36:27Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
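A minimal fill-mask sketch for this checkpoint (the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="cbertrand/checkpoint")
for pred in fill_mask("Paris is the [MASK] of France."):
    print(pred["token_str"], round(pred["score"], 3))
```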
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 12345
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Framework versions
- Transformers 4.33.2
- Pytorch 2.2.0.dev20230912+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
EvanD/xlm-roberta-base-ukrainian-ner-ukrner
|
EvanD
| 2024-01-03T14:00:27Z | 71 | 4 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"named-entity-recognition",
"sequence-tagger-model",
"uk",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T13:36:43Z |
---
pipeline_tag: token-classification
tags:
- named-entity-recognition
- sequence-tagger-model
widget:
- text: Мене звуть Амадей Вольфганг, я живу в Берліні
inference:
parameters:
aggregation_strategy: simple
grouped_entities: true
language:
- uk
---
XLM-RoBERTa model trained on the [Ukrainian NER](https://github.com/lang-uk/flair-ner) dataset from Flair
| Test metric | Results |
|-------------------------|---------------------------|
| test_f1_mac_ukr_ner | 0.9900672435760498 |
| test_loss_ukr_ner | 0.054602641612291336 |
| test_prec_mac_ukr_ner | 0.9386032819747925 |
| test_rec_mac_ukr_ner | 0.9383019208908081 |
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("EvanD/xlm-roberta-base-ukrainian-ner-ukrner")
ner_model = AutoModelForTokenClassification.from_pretrained("EvanD/xlm-roberta-base-ukrainian-ner-ukrner")
nlp = pipeline("ner", model=ner_model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "ะะตะฝะต ะทะฒััั ะะผะฐะดะตะน ะะพะปััะณะฐะฝะณ, ั ะถะธะฒั ะฒ ะะตัะปัะฝั"
ner_results = nlp(example)
print(ner_results)
```
|
SpartanLondoner/ppo-Pyramids
|
SpartanLondoner
| 2024-01-03T13:53:51Z | 4 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-01-03T13:53:47Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SpartanLondoner/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned_chatGPT_temp0_Seed113
|
behzadnet
| 2024-01-03T13:47:17Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-01-03T13:47:14Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training (reconstructed as code in the sketch after this list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
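As a sketch, the config above corresponds to the following `BitsAndBytesConfig` when reloading the base model (the base-model id comes from this card's header; `bitsandbytes` and `accelerate` must be installed):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit was False
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "Trelis/Llama-2-7b-chat-hf-sharded-bf16",
    quantization_config=bnb_config,
    device_map="auto",
)
```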
### Framework versions
- PEFT 0.7.0.dev0
|
behzadnet/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters_chatGPT_temp0_Seed113
|
behzadnet
| 2024-01-03T13:47:07Z | 0 | 0 |
peft
|
[
"peft",
"arxiv:1910.09700",
"base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"base_model:adapter:Trelis/Llama-2-7b-chat-hf-sharded-bf16",
"region:us"
] | null | 2024-01-03T13:47:01Z |
---
library_name: peft
base_model: Trelis/Llama-2-7b-chat-hf-sharded-bf16
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
xsxs/whisper-small-hi
|
xsxs
| 2024-01-03T13:45:38Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-03T09:05:17Z |
---
language:
- zh
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Whisper_Small_tw_nan_tw
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_Small_tw_nan_tw
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
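A minimal transcription sketch using the 🤗 `pipeline` API (the audio path is a placeholder for a local recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="xsxs/whisper-small-hi")
result = asr("sample.wav")  # path to a local audio file
print(result["text"])
```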
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
yentinglin/Taiwan-LLM-MoE-pilot
|
yentinglin
| 2024-01-03T13:42:30Z | 31 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mixtral",
"text-generation",
"traditional mandarin",
"traditional chinese",
"taiwan",
"moe",
"zh-tw",
"zh-hant",
"conversational",
"zh",
"dataset:yentinglin/v1",
"arxiv:2311.17487",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-26T14:41:54Z |
---
license: apache-2.0
datasets:
- yentinglin/v1
language:
- zh
tags:
- traditional mandarin
- traditional chinese
- taiwan
- moe
- mixtral
- zh-tw
- zh-hant
pretty_name: twllm-moe
---
# Taiwan LLM Mixture of Experts - Pilot run
<!-- Provide a quick summary of what the model is/does. -->

## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Yen-Ting Lin 林彥廷](https://yentingl.com/)
- **Compute Funded by:** [HelperAI](https://helperai.ai/)
- **Model type:** [Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral)
- **Language(s) (NLP):** Traditional Mandarin (zh-tw)
- **License:** [Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Finetuned from model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **TMMLU+ score:** 38.09223090909092
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** [Taiwan-LLM](https://github.com/MiuLab/Taiwan-LLM)
- **Paper:** [Taiwan-LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model](https://arxiv.org/pdf/2311.17487.pdf)
- **Demo:** [Taiwan LLM ChatUI](https://twllm.com/)
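## Usage
A minimal 🤗 Transformers sketch (assumptions: the tokenizer ships a chat template, and enough GPU memory is available for the 8x7B MoE weights):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yentinglin/Taiwan-LLM-MoE-pilot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a single-turn conversation with the tokenizer's chat template
messages = [{"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```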
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lin2023taiwan,
title={Taiwan LLM: Bridging the Linguistic Divide with a Culturally Aligned Language Model},
author={Yen-Ting Lin and Yun-Nung Chen},
year={2023},
eprint={2311.17487},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
wenzw/zephyr-7b-sft-lora
|
wenzw
| 2024-01-03T13:25:56Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-30T06:38:27Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-sft-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-sft-lora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9900
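A minimal text-generation sketch (an assumption: the repository hosts full merged weights, as the `mistral` architecture tag suggests):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="wenzw/zephyr-7b-sft-lora", device_map="auto")
out = generator("Explain LoRA fine-tuning in one sentence:", max_new_tokens=64)
print(out[0]["generated_text"])
```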
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 128
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9866 | 0.67 | 272 | 0.9900 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ernlavr/llama-2-7bn-xsum-lora-adapter
|
ernlavr
| 2024-01-03T13:21:50Z | 15 | 1 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"summarization",
"en",
"dataset:EdinburghNLP/xsum",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-12-22T19:18:59Z |
---
base_model: meta-llama/Llama-2-7b-hf
tags:
- generated_from_trainer
model-index:
- name: Llama2-7bn-xsum-adapter
results: []
datasets:
- EdinburghNLP/xsum
language:
- en
pipeline_tag: summarization
metrics:
- rouge
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama2-7bn-xsum-adapter
Weights & Biases runs for training and evaluation are available for a detailed overview!
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
on the [XSum](https://huggingface.co/datasets/EdinburghNLP/xsum) dataset with a causal language modeling objective. You can view all the implementation details on the [GitHub project](https://github.com/ernlavr/llamarizer)
## Weights & Biases Training and Evaluation Documentation
See the [training](https://wandb.ai/ernlavr/adv_nlp2023/runs/yk6ytvv2) and
[evaluation](https://wandb.ai/ernlavr/adv_nlp2023/runs/f41oo2c6?workspace=user-ernestslavrinovics)
on Weights & Biases for more details!
Summary table of final metrics:
| Metric | rouge1 | rouge2 | rougeL | FactCC | ANLI | SummaC | BARTScore |
|------------------------|---------|---------|---------|---------|--------|---------|------------|
| Mean | 0.18 | 0.033 | 0.126 | 0.188 | 0.408 | 0.658 | -3.713 |
| Std | 0.09 | 0.049 | 0.067 | 0.317 | 0.462 | 0.247 | 0.831 |
## Training procedure
Causal language modeling, with the article and its summary nested in a prompt: `Summarize this article: '<INPUT_DOCUMENT>'; Summary: <OUTPUT>`
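At inference time the same prompt format applies; a minimal sketch (assumptions: PEFT is installed and you have approved access to the gated `meta-llama/Llama-2-7b-hf` base weights):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"  # gated; requires approved access
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, "ernlavr/llama-2-7bn-xsum-lora-adapter")

article = "..."  # the document to summarize
prompt = f"Summarize this article: '{article}'; Summary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```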
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 450.5
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
MasterCorneo/Corneo-Tifa-RVC
|
MasterCorneo
| 2024-01-03T13:14:02Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-01-02T20:01:29Z |
---
license: openrail
---
## Model Details
### Model Description
This is an RVC (Realistic Voice Cloning) model for Tifa Lockhart from Final Fantasy VII REMAKE (English audio).
It was trained at a 32 kHz target sample rate, using the RMVPE pitch-extraction algorithm, for 500 epochs.
|
zac/handy
|
zac
| 2024-01-03T13:02:38Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-03T12:59:34Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: handy
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9777777791023254
---
# handy
Autogenerated by HuggingPics 🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### bad hands

#### hands

|
faust01/Taxi-v3-DRLcourse
|
faust01
| 2024-01-03T12:58:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-03T12:58:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-DRLcourse
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.72
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the Deep RL course helper that unpickles the saved Q-table from the Hub
model = load_from_hub(repo_id="faust01/Taxi-v3-DRLcourse", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gyr66/RoBERTa-ext-large-lora-updated-chinese-finetuned-ner
|
gyr66
| 2024-01-03T12:55:50Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:gyr66/RoBERTa-ext-large-chinese-finetuned-ner",
"base_model:finetune:gyr66/RoBERTa-ext-large-chinese-finetuned-ner",
"region:us"
] | null | 2024-01-03T12:55:48Z |
---
base_model: gyr66/RoBERTa-ext-large-chinese-finetuned-ner
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa-ext-large-lora-updated-chinese-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-ext-large-lora-updated-chinese-finetuned-ner
This model is a fine-tuned version of [gyr66/RoBERTa-ext-large-chinese-finetuned-ner](https://huggingface.co/gyr66/RoBERTa-ext-large-chinese-finetuned-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9586
- Precision: 0.7016
- Recall: 0.7518
- F1: 0.7258
- Accuracy: 0.9154
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0034 | 1.0 | 252 | 1.0787 | 0.6753 | 0.7523 | 0.7117 | 0.9121 |
| 0.0032 | 2.0 | 504 | 1.0376 | 0.6830 | 0.7490 | 0.7145 | 0.9141 |
| 0.0018 | 3.0 | 756 | 1.0547 | 0.6731 | 0.7573 | 0.7127 | 0.9126 |
| 0.0032 | 4.0 | 1008 | 1.0262 | 0.6829 | 0.7384 | 0.7096 | 0.9126 |
| 0.0027 | 5.0 | 1260 | 0.9613 | 0.6898 | 0.7445 | 0.7161 | 0.9118 |
| 0.0027 | 6.0 | 1512 | 0.9481 | 0.6780 | 0.7550 | 0.7145 | 0.9120 |
| 0.0019 | 7.0 | 1764 | 0.9328 | 0.6917 | 0.7513 | 0.7203 | 0.9150 |
| 0.0008 | 8.0 | 2016 | 0.9570 | 0.6976 | 0.7520 | 0.7238 | 0.9143 |
| 0.0005 | 9.0 | 2268 | 0.9586 | 0.7016 | 0.7518 | 0.7258 | 0.9154 |
| 0.0003 | 10.0 | 2520 | 0.9565 | 0.6945 | 0.7520 | 0.7221 | 0.9151 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
loanhhquanhh/Poem-LLama2
|
loanhhquanhh
| 2024-01-03T12:54:15Z | 2 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:vinai/PhoGPT-7B5-Instruct",
"base_model:adapter:vinai/PhoGPT-7B5-Instruct",
"region:us"
] | null | 2024-01-02T00:48:00Z |
---
library_name: peft
base_model: vinai/PhoGPT-7B5-Instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
UserGnalin/temp_model
|
UserGnalin
| 2024-01-03T12:39:03Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"base_model:finetune:distilbert/distilbert-base-uncased-finetuned-sst-2-english",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-03T12:14:01Z |
---
license: apache-2.0
base_model: distilbert-base-uncased-finetuned-sst-2-english
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: temp_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# temp_model
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2292
- Accuracy: 0.9207
- F1: 0.7943
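A minimal sketch of running the classifier (the example text is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="UserGnalin/temp_model")
print(classifier("I really enjoyed this movie!"))
```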
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
s3nh/mlabonne-NeuralPipe-7B-slerp-GGUF
|
s3nh
| 2024-01-03T12:38:39Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T12:18:04Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF-format model files for [this project](https://huggingface.co/mlabonne/NeuralPipe-7B-slerp).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.
The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values.
This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
inference or for identifying the model.
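In practice this means a GGUF file is self-describing; for example, it can be loaded in a few lines with `llama-cpp-python` (a sketch; the filename is a placeholder for whichever quantization you download from this repo):
```python
from llama_cpp import Llama

# model_path points at a downloaded .gguf file, e.g. a Q4_K_M quantization
llm = Llama(model_path="neuralpipe-7b-slerp.Q4_K_M.gguf", n_ctx=2048)
out = llm("Q: What does GGUF stand for? A:", max_tokens=64, stop=["\n"])
print(out["choices"][0]["text"])
```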
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|-------|---------|------|--------|--------|--------|------|------|--------|--------|------|------|--------|--------|------|------|-----|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### inference
TODO
# Original model card
|
paraZite410/xlm-roberta-base-finetuned-panx-de
|
paraZite410
| 2024-01-03T12:30:00Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T11:20:49Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1360
- F1: 0.8553
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2509 | 1.0 | 787 | 0.1517 | 0.8280 |
| 0.1198 | 2.0 | 1574 | 0.1360 | 0.8553 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
SpartanLondoner/ppo-SnowballTarget
|
SpartanLondoner
| 2024-01-03T12:27:54Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-02T09:42:22Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: SpartanLondoner/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
gyr66/RoBERTa-ext-large-crf-lora-chinese-finetuned-ner
|
gyr66
| 2024-01-03T12:26:20Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:hfl/chinese-roberta-wwm-ext-large",
"base_model:finetune:hfl/chinese-roberta-wwm-ext-large",
"license:apache-2.0",
"region:us"
] | null | 2024-01-03T11:50:28Z |
---
license: apache-2.0
base_model: hfl/chinese-roberta-wwm-ext-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa-ext-large-crf-lora-chinese-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RoBERTa-ext-large-crf-lora-chinese-finetuned-ner
This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext-large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4056
- Precision: 0.4202
- Recall: 0.5916
- F1: 0.4914
- Accuracy: 0.9456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.3615 | 1.0 | 503 | 0.8081 | 0.1274 | 0.1568 | 0.1406 | 0.9028 |
| 0.702 | 2.0 | 1006 | 0.5824 | 0.2954 | 0.4194 | 0.3467 | 0.9261 |
| 0.5585 | 3.0 | 1509 | 0.5107 | 0.3305 | 0.4922 | 0.3955 | 0.9323 |
| 0.4959 | 4.0 | 2012 | 0.4654 | 0.3716 | 0.5274 | 0.4360 | 0.9377 |
| 0.4614 | 5.0 | 2515 | 0.4427 | 0.3880 | 0.5493 | 0.4548 | 0.9399 |
| 0.4381 | 6.0 | 3018 | 0.4292 | 0.3996 | 0.5657 | 0.4684 | 0.9420 |
| 0.4233 | 7.0 | 3521 | 0.4166 | 0.4111 | 0.5813 | 0.4816 | 0.9441 |
| 0.4128 | 8.0 | 4024 | 0.4124 | 0.4144 | 0.5879 | 0.4862 | 0.9448 |
| 0.4008 | 9.0 | 4527 | 0.4067 | 0.4194 | 0.5904 | 0.4904 | 0.9455 |
| 0.3983 | 10.0 | 5030 | 0.4056 | 0.4202 | 0.5916 | 0.4914 | 0.9456 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
EMBO/SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_base
|
EMBO
| 2024-01-03T12:22:11Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:source_data",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T12:10:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- source_data
metrics:
- precision
- recall
- f1
model-index:
- name: SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_base
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: source_data
type: source_data
args: ROLES_GP
metrics:
- name: Precision
type: precision
value: 0.9325065274151436
- name: Recall
type: recall
value: 0.9359276729559748
- name: F1
type: f1
value: 0.9342139680878889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_base
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on the source_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0129
- Accuracy Score: 0.9955
- Precision: 0.9325
- Recall: 0.9359
- F1: 0.9342
## Model description
More information needed
## Intended uses & limitations
More information needed
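Usage is not yet documented; a minimal sketch with the `transformers` token-classification pipeline follows (the input sentence is illustrative, not taken from the source_data evaluation set):
```python
from transformers import pipeline

# Token-classification pipeline over the fine-tuned BioLinkBERT checkpoint;
# aggregation merges subword predictions into entity-level spans.
nlp = pipeline(
    "token-classification",
    model="EMBO/SourceData_GENEPROD-ROLES_v_1-0-2_BioLinkBERT_base",
    aggregation_strategy="simple",
)

# Illustrative figure-legend-style sentence; real inputs would come from SourceData captions.
print(nlp("Cells were stained for CD19 after knockdown of BRCA1."))
```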
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
| 0.0161 | 1.0 | 471 | 0.0129 | 0.9955 | 0.9325 | 0.9359 | 0.9342 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+bfe5ad2
- Datasets 2.10.1
- Tokenizers 0.12.1
|
cuongdz01/layoutlmv3-cord
|
cuongdz01
| 2024-01-03T12:17:37Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/layoutlmv3-base",
"base_model:finetune:microsoft/layoutlmv3-base",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-03T11:23:49Z |
---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-cord
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1589
- Precision: 0.9433
- Recall: 0.9521
- F1: 0.9477
- Accuracy: 0.9669
## Model description
More information needed
## Intended uses & limitations
More information needed
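Usage is not yet documented; a hedged sketch for receipt-style inputs follows. It assumes OCR via `apply_ocr=True` (which requires Tesseract and `pytesseract` to be installed), and the image path is hypothetical:
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

model_id = "cuongdz01/layoutlmv3-cord"
# apply_ocr=True has the processor run Tesseract to extract words and bounding boxes
# from the raw image before tokenization.
processor = AutoProcessor.from_pretrained(model_id, apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("receipt.png").convert("RGB")  # hypothetical input image
encoding = processor(image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoding)

# Map each token's highest-scoring class id to its label name.
pred_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```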
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.5 | 100 | 0.6487 | 0.7825 | 0.8006 | 0.7914 | 0.8330 |
| No log | 1.0 | 200 | 0.4266 | 0.8496 | 0.8686 | 0.8590 | 0.8925 |
| No log | 1.5 | 300 | 0.2553 | 0.9008 | 0.9057 | 0.9033 | 0.9341 |
| No log | 2.0 | 400 | 0.2496 | 0.8960 | 0.9057 | 0.9008 | 0.9295 |
| 0.5667 | 2.5 | 500 | 0.2016 | 0.9274 | 0.9374 | 0.9324 | 0.9554 |
| 0.5667 | 3.0 | 600 | 0.1806 | 0.9387 | 0.9467 | 0.9427 | 0.9609 |
| 0.5667 | 3.5 | 700 | 0.1667 | 0.9424 | 0.9474 | 0.9449 | 0.9630 |
| 0.5667 | 4.0 | 800 | 0.1735 | 0.9452 | 0.9467 | 0.9459 | 0.9639 |
| 0.5667 | 4.5 | 900 | 0.1657 | 0.9456 | 0.9529 | 0.9492 | 0.9660 |
| 0.1025 | 5.0 | 1000 | 0.1589 | 0.9433 | 0.9521 | 0.9477 | 0.9669 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
zac/Bad_hands
|
zac
| 2024-01-03T12:15:34Z | 5 | 1 |
transformers
|
[
"transformers",
"vit",
"image-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-03T11:13:36Z |
---
license: apache-2.0
pipeline_tag: image-classification
---
|
s3nh/abacusai-Giraffe-13b-32k-v3-GGUF
|
s3nh
| 2024-01-03T12:08:02Z | 0 | 2 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"zh",
"en",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-03T11:51:05Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- zh
- en
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [This project](https://huggingface.co/abacusai/Giraffe-13b-32k-v3).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: they can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
### Perplexity params
| Model | Measure | Q2_K | Q3_K_S | Q3_K_M | Q3_K_L | Q4_0 | Q4_1 | Q4_K_S | Q4_K_M | Q5_0 | Q5_1 | Q5_K_S | Q5_K_M | Q6_K | Q8_0 | F16 |
|:-----:|:-------:|:----:|:------:|:------:|:------:|:----:|:----:|:------:|:------:|:----:|:----:|:------:|:------:|:----:|:----:|:---:|
| 7B | perplexity | 6.7764 | 6.4571 | 6.1503 | 6.0869 | 6.1565 | 6.0912 | 6.0215 | 5.9601 | 5.9862 | 5.9481 | 5.9419 | 5.9208 | 5.9110 | 5.9070 | 5.9066 |
| 13B | perplexity | 5.8545 | 5.6033 | 5.4498 | 5.4063 | 5.3860 | 5.3608 | 5.3404 | 5.3002 | 5.2856 | 5.2706 | 5.2785 | 5.2638 | 5.2568 | 5.2548 | 5.2543 |
### Inference
TODO
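Pending the section above, a minimal sketch with `llama-cpp-python` is shown; the exact `.gguf` filename and quantization are assumptions, so check the repo's file listing for the file you actually downloaded:
```python
from llama_cpp import Llama

# Hypothetical filename -- pick the actual quantization you downloaded from this repo.
llm = Llama(
    model_path="abacusai-Giraffe-13b-32k-v3.Q4_K_M.gguf",
    n_ctx=32768,  # matches the 32k context implied by the model name
)

output = llm("Question: What is the GGUF format?\nAnswer:", max_tokens=128)
print(output["choices"][0]["text"])
```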
# Original model card
|
jeevan-23/jupy-model_v4
|
jeevan-23
| 2024-01-03T11:59:17Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:jeevan-23/jupy-model_v3",
"base_model:finetune:jeevan-23/jupy-model_v3",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-01-03T10:55:47Z |
---
license: mit
base_model: jeevan-23/jupy-model_v3
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: jupy-model_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jupy-model_v4
This model is a fine-tuned version of [jeevan-23/jupy-model_v3](https://huggingface.co/jeevan-23/jupy-model_v3) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
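Usage is not yet documented; since the checkpoint is a vision-encoder-decoder, a minimal sketch with the image-to-text pipeline may apply (the input path is hypothetical, and the expected input domain is not stated on the card):
```python
from transformers import pipeline

# Hypothetical usage -- the card does not state what kind of images the model expects.
captioner = pipeline("image-to-text", model="jeevan-23/jupy-model_v4")
print(captioner("example.png"))  # local path or URL to an input image
```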
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|