repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2t_es_hubert_s251 | jonatasgrosman | hubert | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 452 | false |
# exp_w2v2t_es_hubert_s251
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
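A minimal transcription sketch using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library mentioned above (the audio paths are placeholders for your own 16kHz Spanish recordings):
```python
from huggingsound import SpeechRecognitionModel

# Load the fine-tuned checkpoint; inputs must be sampled at 16kHz
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_es_hubert_s251")

# Placeholder paths to your own recordings
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```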
| 46809c8b1c5f4cb81c3e27f5a2d7df5a |
test1234678/distilbert-base-uncased-finetuned-clinc | test1234678 | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['clinc_oos'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,481 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7773
- Accuracy: 0.9152
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list for an equivalent `TrainingArguments` setup):
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
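These values correspond to a standard `Trainer` run; a rough, illustrative mapping to `TrainingArguments` (the output directory name is an assumption, and model/dataset setup is omitted) could look like:
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters;
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # assumed name
    learning_rate=2e-05,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```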
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.293 | 1.0 | 318 | 3.2831 | 0.7432 |
| 2.6252 | 2.0 | 636 | 1.8743 | 0.8310 |
| 1.5406 | 3.0 | 954 | 1.1575 | 0.8939 |
| 1.0105 | 4.0 | 1272 | 0.8626 | 0.9094 |
| 0.7962 | 5.0 | 1590 | 0.7773 | 0.9152 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.10.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| f150bdd775010ecb71c50c21d0b0ae8e |
leo93/bert-finetuned-ner-30 | leo93 | bert | 12 | 7 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,030 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0453
- Precision: 0.9275
- Recall: 0.9492
- F1: 0.9382
- Accuracy: 0.9934
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 407 | 0.0539 | 0.8283 | 0.8758 | 0.8514 | 0.9866 |
| 0.1524 | 2.0 | 814 | 0.0333 | 0.8931 | 0.9123 | 0.9026 | 0.9915 |
| 0.0381 | 3.0 | 1221 | 0.0345 | 0.8835 | 0.9280 | 0.9052 | 0.9906 |
| 0.0179 | 4.0 | 1628 | 0.0351 | 0.8890 | 0.9361 | 0.9119 | 0.9909 |
| 0.0089 | 5.0 | 2035 | 0.0310 | 0.9102 | 0.9372 | 0.9235 | 0.9924 |
| 0.0089 | 6.0 | 2442 | 0.0344 | 0.9198 | 0.9383 | 0.9289 | 0.9922 |
| 0.0057 | 7.0 | 2849 | 0.0331 | 0.9144 | 0.9448 | 0.9294 | 0.9931 |
| 0.0039 | 8.0 | 3256 | 0.0340 | 0.9144 | 0.9481 | 0.9309 | 0.9928 |
| 0.0027 | 9.0 | 3663 | 0.0423 | 0.9032 | 0.9481 | 0.9251 | 0.9921 |
| 0.0018 | 10.0 | 4070 | 0.0373 | 0.9047 | 0.9507 | 0.9271 | 0.9923 |
| 0.0018 | 11.0 | 4477 | 0.0448 | 0.8932 | 0.9474 | 0.9195 | 0.9910 |
| 0.0014 | 12.0 | 4884 | 0.0380 | 0.9079 | 0.9474 | 0.9272 | 0.9928 |
| 0.0015 | 13.0 | 5291 | 0.0360 | 0.9231 | 0.9474 | 0.9351 | 0.9936 |
| 0.0013 | 14.0 | 5698 | 0.0378 | 0.9243 | 0.9456 | 0.9348 | 0.9935 |
| 0.0013 | 15.0 | 6105 | 0.0414 | 0.9197 | 0.9496 | 0.9344 | 0.9930 |
| 0.0009 | 16.0 | 6512 | 0.0405 | 0.9202 | 0.9478 | 0.9338 | 0.9929 |
| 0.0009 | 17.0 | 6919 | 0.0385 | 0.9305 | 0.9441 | 0.9373 | 0.9933 |
| 0.0006 | 18.0 | 7326 | 0.0407 | 0.9285 | 0.9437 | 0.9360 | 0.9934 |
| 0.0009 | 19.0 | 7733 | 0.0428 | 0.9203 | 0.9488 | 0.9343 | 0.9929 |
| 0.0006 | 20.0 | 8140 | 0.0455 | 0.9232 | 0.9536 | 0.9382 | 0.9928 |
| 0.0004 | 21.0 | 8547 | 0.0462 | 0.9261 | 0.9529 | 0.9393 | 0.9930 |
| 0.0004 | 22.0 | 8954 | 0.0423 | 0.9359 | 0.9492 | 0.9425 | 0.9940 |
| 0.0005 | 23.0 | 9361 | 0.0446 | 0.9180 | 0.9529 | 0.9351 | 0.9931 |
| 0.0005 | 24.0 | 9768 | 0.0430 | 0.9361 | 0.9467 | 0.9413 | 0.9938 |
| 0.0002 | 25.0 | 10175 | 0.0436 | 0.9322 | 0.9496 | 0.9408 | 0.9935 |
| 0.0002 | 26.0 | 10582 | 0.0440 | 0.9275 | 0.9492 | 0.9382 | 0.9935 |
| 0.0002 | 27.0 | 10989 | 0.0450 | 0.9272 | 0.9488 | 0.9379 | 0.9932 |
| 0.0002 | 28.0 | 11396 | 0.0445 | 0.9304 | 0.9470 | 0.9386 | 0.9935 |
| 0.0003 | 29.0 | 11803 | 0.0449 | 0.9278 | 0.9481 | 0.9378 | 0.9934 |
| 0.0001 | 30.0 | 12210 | 0.0453 | 0.9275 | 0.9492 | 0.9382 | 0.9934 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
| 5d967af9f5e2bbea3b254a098ad573f7 |
jsunster/distilbert-base-uncased-finetuned-squad | jsunster | distilbert | 12 | 3 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,273 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1476
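A minimal usage sketch with the `question-answering` pipeline (the question and context below are purely illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jsunster/distilbert-base-uncased-finetuned-squad")

# Illustrative inputs, not taken from SQuAD
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The distilbert-base-uncased checkpoint was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```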
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2823 | 1.0 | 2767 | 1.1980 |
| 1.0336 | 2.0 | 5534 | 1.1334 |
| 0.8513 | 3.0 | 8301 | 1.1476 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| d680e3d0ea0dba9f3b6216b9214b3054 |
nikaashpuri/gpt-expt-sp | nikaashpuri | gpt2 | 18 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 6,194 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-expt-sp
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2561
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8623 | 3.12 | 100 | 1.7653 |
| 1.6403 | 6.24 | 200 | 1.5635 |
| 1.5806 | 9.37 | 300 | 1.5326 |
| 1.5433 | 12.49 | 400 | 1.4568 |
| 1.362 | 15.61 | 500 | 0.9368 |
| 0.8739 | 18.73 | 600 | 0.5006 |
| 0.5905 | 21.85 | 700 | 0.3875 |
| 0.4755 | 24.98 | 800 | 0.3440 |
| 0.4252 | 28.12 | 900 | 0.3238 |
| 0.3904 | 31.24 | 1000 | 0.3093 |
| 0.366 | 34.37 | 1100 | 0.3004 |
| 0.3492 | 37.49 | 1200 | 0.2922 |
| 0.3345 | 40.61 | 1300 | 0.2860 |
| 0.3277 | 43.73 | 1400 | 0.2819 |
| 0.324 | 46.85 | 1500 | 0.2800 |
| 0.318 | 49.98 | 1600 | 0.2766 |
| 0.314 | 53.12 | 1700 | 0.2736 |
| 0.308 | 56.24 | 1800 | 0.2740 |
| 0.306 | 59.37 | 1900 | 0.2716 |
| 0.3037 | 62.49 | 2000 | 0.2708 |
| 0.2993 | 65.61 | 2100 | 0.2685 |
| 0.2991 | 68.73 | 2200 | 0.2680 |
| 0.297 | 71.85 | 2300 | 0.2670 |
| 0.2964 | 74.98 | 2400 | 0.2662 |
| 0.2964 | 78.12 | 2500 | 0.2653 |
| 0.2942 | 81.24 | 2600 | 0.2664 |
| 0.2937 | 84.37 | 2700 | 0.2655 |
| 0.2886 | 87.49 | 2800 | 0.2631 |
| 0.2877 | 90.61 | 2900 | 0.2634 |
| 0.2859 | 93.73 | 3000 | 0.2628 |
| 0.2852 | 96.85 | 3100 | 0.2629 |
| 0.2841 | 99.98 | 3200 | 0.2629 |
| 0.2848 | 103.12 | 3300 | 0.2625 |
| 0.2811 | 106.24 | 3400 | 0.2611 |
| 0.281 | 109.37 | 3500 | 0.2608 |
| 0.2794 | 112.49 | 3600 | 0.2599 |
| 0.2787 | 115.61 | 3700 | 0.2604 |
| 0.2781 | 118.73 | 3800 | 0.2601 |
| 0.2777 | 121.85 | 3900 | 0.2604 |
| 0.2776 | 124.98 | 4000 | 0.2600 |
| 0.2786 | 128.12 | 4100 | 0.2597 |
| 0.2757 | 131.24 | 4200 | 0.2597 |
| 0.2754 | 134.37 | 4300 | 0.2590 |
| 0.2758 | 137.49 | 4400 | 0.2596 |
| 0.2742 | 140.61 | 4500 | 0.2598 |
| 0.2731 | 143.73 | 4600 | 0.2581 |
| 0.2738 | 146.85 | 4700 | 0.2587 |
| 0.273 | 149.98 | 4800 | 0.2583 |
| 0.2736 | 153.12 | 4900 | 0.2579 |
| 0.271 | 156.24 | 5000 | 0.2580 |
| 0.2709 | 159.37 | 5100 | 0.2578 |
| 0.2708 | 162.49 | 5200 | 0.2582 |
| 0.2697 | 165.61 | 5300 | 0.2578 |
| 0.2695 | 168.73 | 5400 | 0.2578 |
| 0.269 | 171.85 | 5500 | 0.2582 |
| 0.2691 | 174.98 | 5600 | 0.2574 |
| 0.2705 | 178.12 | 5700 | 0.2574 |
| 0.2678 | 181.24 | 5800 | 0.2572 |
| 0.2692 | 184.37 | 5900 | 0.2582 |
| 0.2687 | 187.49 | 6000 | 0.2572 |
| 0.2673 | 190.61 | 6100 | 0.2571 |
| 0.2666 | 193.73 | 6200 | 0.2568 |
| 0.2662 | 196.85 | 6300 | 0.2573 |
| 0.2662 | 199.98 | 6400 | 0.2568 |
| 0.2688 | 203.12 | 6500 | 0.2567 |
| 0.2658 | 206.24 | 6600 | 0.2570 |
| 0.2666 | 209.37 | 6700 | 0.2567 |
| 0.2652 | 212.49 | 6800 | 0.2565 |
| 0.2651 | 215.61 | 6900 | 0.2568 |
| 0.2649 | 218.73 | 7000 | 0.2566 |
| 0.2648 | 221.85 | 7100 | 0.2564 |
| 0.2645 | 224.98 | 7200 | 0.2564 |
| 0.2662 | 228.12 | 7300 | 0.2564 |
| 0.2641 | 231.24 | 7400 | 0.2564 |
| 0.2641 | 234.37 | 7500 | 0.2563 |
| 0.2639 | 237.49 | 7600 | 0.2563 |
| 0.2638 | 240.61 | 7700 | 0.2563 |
| 0.2637 | 243.73 | 7800 | 0.2562 |
| 0.2635 | 246.85 | 7900 | 0.2562 |
| 0.2633 | 249.98 | 8000 | 0.2563 |
| 0.2653 | 253.12 | 8100 | 0.2562 |
| 0.2631 | 256.24 | 8200 | 0.2562 |
| 0.2631 | 259.37 | 8300 | 0.2561 |
| 0.263 | 262.49 | 8400 | 0.2561 |
| 0.263 | 265.61 | 8500 | 0.2561 |
| 0.2629 | 268.73 | 8600 | 0.2561 |
| 0.2628 | 271.85 | 8700 | 0.2561 |
| 0.2628 | 274.98 | 8800 | 0.2561 |
| 0.2646 | 278.12 | 8900 | 0.2561 |
| 0.2626 | 281.24 | 9000 | 0.2561 |
| 0.2626 | 284.37 | 9100 | 0.2561 |
| 0.2625 | 287.49 | 9200 | 0.2561 |
| 0.2626 | 290.61 | 9300 | 0.2561 |
| 0.2626 | 293.73 | 9400 | 0.2561 |
| 0.2626 | 296.85 | 9500 | 0.2561 |
| 0.2625 | 299.98 | 9600 | 0.2561 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
| d6399f305b526a899873ec43b7f7da42 |
Salesforce/codegen-2B-multi | Salesforce | codegen | 10 | 6,096 | transformers | 11 | text-generation | true | false | false | bsd-3-clause | null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 3,029 | false |
# CodeGen (CodeGen-Multi 2B)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 2B** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 2B* and further pre-trained on a dataset of multiple programming languages, and "2B" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 2B) was firstly initialized with *CodeGen-NL 2B*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models was trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and calculating the likelihood of them.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
| ede2fd6bc6ec383e0b9af6896854181d |
softcatala/opennmt-cat-deu | softcatala | null | 5 | 0 | opennmt | 0 | translation | false | false | false | mit | ['ca', 'de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 1,165 | false |
### Introduction
Catalan - German translation model for OpenNMT. These are the same models that we have in production at https://www.softcatala.org/traductor/.
The models are quantized for low latency.
### Usage
Install the necessary dependencies:
```bash
pip3 install ctranslate2 pyonmttok
```
Simple tokenization & translation using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the CTranslate2 model and SentencePiece tokenizer files
model_dir = snapshot_download(repo_id="softcatala/opennmt-cat-deu", revision="main")

# Tokenize the Catalan source sentence ("Hola amics" = "Hello friends")
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/sp_m.model")
tokenized = tokenizer.tokenize("Hola amics")

# Translate the token sequence and detokenize the German output
translator = ctranslate2.Translator(model_dir)
translated = translator.translate_batch([tokenized[0]])
print(tokenizer.detokenize(translated[0][0]['tokens']))
```
## Benchmarks
| testset | BLEU |
|---------------------------------------|-------|
| test dataset (from train/dev/test) | 30.6 |
| Flores101 dataset | 21.6 |
## Additional information
* https://github.com/Softcatala/nmt-models
* https://github.com/Softcatala/parallel-catalan-corpus
| e5ce6cac5b7792d912bf8cd3c057c07d |
theojolliffe/bart-large-cnn-finetuned-roundup-4 | theojolliffe | bart | 13 | 3 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,767 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-4
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2573
- Rouge1: 49.0193
- Rouge2: 28.6311
- Rougel: 31.3363
- Rougelsum: 46.1408
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 132 | 1.3178 | 48.4526 | 28.6361 | 30.2875 | 45.4822 | 142.0 |
| No log | 2.0 | 264 | 1.2404 | 48.139 | 28.2459 | 29.3584 | 45.0785 | 142.0 |
| No log | 3.0 | 396 | 1.2389 | 49.74 | 29.7834 | 33.143 | 46.8147 | 142.0 |
| 0.9855 | 4.0 | 528 | 1.2573 | 49.0193 | 28.6311 | 31.3363 | 46.1408 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 29dd369a6dcbf64135399aa17fa12e63 |
adielsa/distilbert-base-uncased-finetuned-cola | adielsa | distilbert | 13 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['glue'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8256
- Matthews Correlation: 0.5387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5257 | 1.0 | 535 | 0.5286 | 0.4093 |
| 0.3447 | 2.0 | 1070 | 0.5061 | 0.4972 |
| 0.2303 | 3.0 | 1605 | 0.5878 | 0.5245 |
| 0.1761 | 4.0 | 2140 | 0.7969 | 0.5153 |
| 0.1346 | 5.0 | 2675 | 0.8256 | 0.5387 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
| 4890126daaa9a2d6c0fe07c76035a3d3 |
mmeet611/finetuning-sentiment-model-3000-samples | mmeet611 | distilbert | 16 | 11 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['imdb'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3052
- Accuracy: 0.8633
- F1: 0.8629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| 20a382a5da973285de359cd0af708ca5 |
pranay-j/whisper-large-v2-hy | pranay-j | whisper | 21 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['hy'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,351 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the common_voice_11_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4380
- Wer: 39.7368
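A minimal transcription sketch with the `automatic-speech-recognition` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="pranay-j/whisper-large-v2-hy",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Placeholder path to an Armenian recording
print(asr("sample.wav")["text"])
```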
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0001 | 34.0 | 1500 | 0.4380 | 39.7368 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| 1063695c68b316b7f5def13e3d18ca10 |
Sreevishnu/funnel-transformer-small-imdb | Sreevishnu | funnel | 7 | 14 | transformers | 1 | text-classification | true | false | false | apache-2.0 | ['en'] | ['imdb'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['sentiment-analysis'] | false | true | true | 2,725 | false |
# Funnel Transformer small (B4-4-4 with decoder) fine-tuned on IMDB for Sentiment Analysis
These are the model weights for the Funnel Transformer small model fine-tuned on the IMDB dataset for performing Sentiment Analysis with `max_position_embeddings=1024`.
The original model weights for the English language are from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small), and it uses a similar objective to [ELECTRA](https://huggingface.co/transformers/model_doc/electra.html). It was introduced in [this paper](https://arxiv.org/pdf/2006.03236.pdf) and first released in [this repository](https://github.com/laiguokun/Funnel-Transformer). This model is uncased: it does not make a difference between english and English.
## Fine-tuning Results
| | Accuracy | Precision | Recall | F1 |
|-------------------------------|----------|-----------|----------|----------|
| funnel-transformer-small-imdb | 0.956530 | 0.952286 | 0.961075 | 0.956661 |
## Model description (from [funnel-transformer/small](https://huggingface.co/funnel-transformer/small))
Funnel Transformer is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, a small language model corrupts the input texts and serves as a generator of inputs for this model, and the pretraining objective is to predict which token is an original and which one has been replaced, a bit like a GAN training.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
# How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained(
    "Sreevishnu/funnel-transformer-small-imdb",
    use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "Sreevishnu/funnel-transformer-small-imdb",
    num_labels=2,
    max_position_embeddings=1024)
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
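Continuing from the snippet above, the raw logits can be turned into sentiment probabilities; the label order used here (index 1 = positive) is an assumption and should be checked against `model.config.id2label`:
```python
import torch

with torch.no_grad():
    logits = model(**encoded_input).logits

probs = torch.softmax(logits, dim=-1)[0]
# Assumed mapping: index 0 = negative, index 1 = positive
print({"negative": probs[0].item(), "positive": probs[1].item()})
```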
# Example App
https://lazy-film-reviews-7gif2bz4sa-ew.a.run.app/
Project repo: https://github.com/akshaydevml/lazy-film-reviews
| 77062e61db64e0a3cb09ea6dba6c0fb6 |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256 | gokuls | mobilebert | 17 | 2 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,860 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1337
- Pearson: 0.0151
- Spearmanr: 0.0166
- Combined Score: 0.0159
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.075 | 1.0 | 45 | 1.1337 | 0.0151 | 0.0166 | 0.0159 |
| 1.0752 | 2.0 | 90 | 1.1691 | 0.0603 | 0.0648 | 0.0626 |
| 1.0435 | 3.0 | 135 | 1.2035 | 0.0659 | 0.0746 | 0.0703 |
| 1.0472 | 4.0 | 180 | 1.1488 | 0.0764 | 0.0817 | 0.0790 |
| 0.9687 | 5.0 | 225 | 1.5234 | 0.0979 | 0.0959 | 0.0969 |
| 0.9016 | 6.0 | 270 | 1.2243 | 0.1434 | 0.1381 | 0.1408 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
| 945bcde1de0fbe64d034bb9ff1287cdb |
m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi | m3hrdadfi | albert | 13 | 11 | transformers | 0 | text-classification | true | true | false | apache-2.0 | ['fa'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 3,143 | false |
# ALBERT Persian
A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language
> You can call it "little BERT" (برت_کوچولو)
[ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) is the first attempt on ALBERT for the Persian Language. The model was trained based on Google's ALBERT BASE Version 2.0 over various writing styles from numerous subjects (e.g., scientific, novels, news) with more than 3.9M documents, 73M sentences, and 1.3B words, like the way we did for ParsBERT.
Please follow the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo for the latest information about previous and current models.
## Persian Sentiment [Digikala, SnappFood, DeepSentiPers]
It aims to classify text, such as comments, based on their emotional bias. We tested three well-known datasets for this task: `Digikala` user comments, `SnappFood` user comments, and `DeepSentiPers` in both binary and multi-class forms.
### DeepSentiPers
DeepSentiPers, which is a balanced and augmented version of SentiPers, contains 12,138 user opinions about digital products labeled with five different classes: two positive (happy and delighted), two negative (furious and angry), and one neutral class. Therefore, this dataset can be utilized for both multi-class and binary classification. In the case of binary classification, the neutral class and its corresponding sentences are removed from the dataset.
**Binary:**
1. Negative (Furious + Angry)
2. Positive (Happy + Delighted)
**Multi:**
1. Furious
2. Angry
3. Neutral
4. Happy
5. Delighted
| Label | # |
|:---------:|:----:|
| Furious | 236 |
| Angry | 1357 |
| Neutral | 2874 |
| Happy | 2848 |
| Delighted | 2516 |
**Download**
You can download the dataset from:
- [SentiPers](https://github.com/phosseini/sentipers)
- [DeepSentiPers](https://github.com/JoyeBright/DeepSentiPers)
## Results
The following table summarizes the F1 scores obtained compared to other models and architectures.
| Dataset | ALBERT-fa-base-v2 | ParsBERT-v1 | mBERT | DeepSentiPers |
|:------------------------:|:-----------------:|:-----------:|:-----:|:-------------:|
| SentiPers (Multi Class) | 66.12 | 71.11 | - | 69.33 |
| SentiPers (Binary Class) | 91.09 | 92.13 | - | 91.98 |
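A minimal usage sketch with the `text-classification` pipeline (the example comment is illustrative; the returned label names come from the model's own config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="m3hrdadfi/albert-fa-base-v2-sentiment-deepsentipers-multi",
)

# Illustrative Persian comment: "The build quality of this phone is excellent"
print(classifier("کیفیت ساخت این گوشی عالی است"))
```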
### BibTeX entry and citation info
Please cite in publications as the following:
```bibtex
@misc{ALBERTPersian,
author = {Mehrdad Farahani},
title = {ALBERT-Persian: A Lite BERT for Self-supervised Learning of Language Representations for the Persian Language},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/m3hrdadfi/albert-persian}},
}
@article{ParsBERT,
title={ParsBERT: Transformer-based Model for Persian Language Understanding},
author={Mehrdad Farahani and Mohammad Gharachorloo and Marzieh Farahani and Mohammad Manthouri},
journal={ArXiv},
year={2020},
volume={abs/2005.12515}
}
```
## Questions?
Post a Github issue on the [ALBERT-Persian](https://github.com/m3hrdadfi/albert-persian) repo.
| 44b54b79cdeb6bf3e66c70ec0d4e0b78 |
andite/pastel-mix | andite | null | 43 | 7,140 | diffusers | 435 | text-to-image | false | false | false | creativeml-openrail-m | ['en'] | null | null | 9 | 1 | 6 | 2 | 5 | 5 | 0 | ['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image', 'diffusers'] | false | true | true | 11,912 | false |
Update Logs:
[1/27/22]
I uploaded the model in CivitAI! -> https://civitai.com/models/5414/pastel-mix-stylized-anime-model I'd appreciate the ratings, thank you!
[2/2/22]
Uploaded a lora version.
<center><h1><b>Pastel Mix</b></h1></center>
<p align="center">Welcome to Pastel Mix - a stylized latent diffusion model. This model is intended to produce high-quality, highly detailed anime style with just a few prompts.</p>
<p align="center">This model is made with the thought of imitating pastel-like art and the potential of mixing LORAs into a model altogether to create a fantastic mix.
The recipe for this mix can be found below. Like other anime-style Stable Diffusion models, it also supports danbooru tags for generating images. </p>
<p align="center">e.g. <b>masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit</b></p>
<p align="center"><img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0020.png">
<img src="https://huggingface.co/andite/Pastel-Mix/resolve/main/example-images/grid-0018.png"></p>
-------
## How to download with Git
```
git lfs install
git clone https://huggingface.co/andite/pastel-mix
```
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "andite/pastel-mix"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "hatsune_miku"
image = pipe(prompt).images[0]
image.save("./hatsune_miku.png")
```
# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run pastel-mix:
[Open in Spaces](https://huggingface.co/spaces/akhaliq/pastel-mix)
## Examples

```
masterpiece, best quality, ultra-detailed, illustration, portrait, 1girl
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, portrait, hakurei reimu, 1girl, throne room, dimly lit
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, 1girl, witch hat, purple eyes, blonde hair, wielding a purple staff blasting purple energy, purple beam, purple effects, dragons, chaos
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, 1girl, black hair, yellow eyes, red roses, chains
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2203084815, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent
```

```
masterpiece, best quality, ultra-detailed, illustration, close-up, straight on, face focus, 1girl, white hair, golden eyes, long hair, halo, angel wings, serene expression, looking at viewer
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 240742293, Size: 640x448, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 1280x960, Hires steps: 20, Hires upscaler: Latent
```
## So what the hell is the 'better-vae' version?
I merged the pastel-waifu-diffusion.vae.pt inside the model so you don't have to set up the vae anymore.

life so much ez now since you don't have to download the vae and set it up right?
## What is pastelmix-lora.safetensors?
It's a LoRA version made by extracting the LoRAs from pastel-mix using a script similar to the add-difference method.
https://github.com/bmaltais/kohya_ss/blob/master/train_network_README.md
## Guide
For the settings or parameters, I recommend using these settings.

```
Sampler: DPM++ 2M Karras
Steps: 20
CFG Scale: 7
Hires. Fix: On
Upscaler: Latent (MUST!)
Hires Steps: 20
Denoising Strength: 0.
```
I prefer using 0.6 since it's the sweet spot of this model. If you can find a better setting for this model, then good for you lol.
Latent upscaler is the best setting for me since it retains or enhances the pastel style. Other upscalers like Lanczos or Anime6B tend to smooth it out, removing the pastel-like brushwork.
Please use the **VAE** that I uploaded in this repository. It is from the [Waifu Diffusion](https://huggingface.co/hakurei/waifu-diffusion-v1-4/tree/main/vae) team. Credits to [haru](https://huggingface.co/hakurei) for letting me rename and upload it.
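For Diffusers users, a rough equivalent of the recommended sampler settings above is sketched below; the mapping of DPM++ 2M Karras to `DPMSolverMultistepScheduler` with Karras sigmas is an assumption and requires a reasonably recent `diffusers` release (the original guide targets the web UI):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained("andite/pastel-mix", torch_dtype=torch.float16).to("cuda")

# Approximate DPM++ 2M Karras by enabling Karras sigmas on the multistep DPM-Solver
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)

image = pipe("masterpiece, best quality, 1girl", num_inference_steps=20, guidance_scale=7).images[0]
image.save("sample.png")
```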
## Tip (Optional)
Putting `mksks style` at the beginning of the prompt can further influence the pastel-like style and make the output better. It is optional though, so it's up to you. You don't really need it.

```
mksks style, masterpiece, best quality, upper body, 1girl, looking at viewer, red hair, medium hair, purple eyes, demon horns, black coat, indoors, dimly lit
Negative prompt: lowres, ((bad anatomy)), ((bad hands)), text, missing finger, extra digits, fewer digits, blurry, ((mutated hands and fingers)), (poorly drawn face), ((mutation)), ((deformed face)), (ugly), ((bad proportions)), ((extra limbs)), extra face, (double head), (extra head), ((extra feet)), monster, logo, cropped, worst quality, low quality, normal quality, jpeg, humpbacked, long body, long neck, ((jpeg artifacts))
Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 580841049, Size: 448x640, Model hash: 7edc8e08, Model: pastelmix-fp32, Denoising strength: 0.6, Clip skip: 2, ENSD: 31337, Hires resize: 960x1280, Hires steps: 20, Hires upscaler: Latent
```
## Recipe
Merging the models.
| Model: A | Model: B | Weight | Base alpha | Merge Name |
| --- | --- | --- | --- | --- |
| [dpepmkmp](https://huggingface.co/closertodeath/dpepmkmp) | [Tea](https://huggingface.co/andite/desserts) | 1,0.9,0.7,0.5,0.3,0.1,1,1,1,1,1,1,0,1,1,1,1,1,1,0.1,0.3,0.5,0.7,0.9,1 | 0 | dpeptea |
| dpeptea | [basil-mix](https://huggingface.co/nuigurumi/basil_mix) | 1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 | 0 | dpeptea-basil |
Merging the loras into the model.
| Model | Lora | Weight | Merge Name |
| --- | --- | --- | --- |
| [dpeptea-basil](https://huggingface.co/closertodeath/dpepteahands3) | [Magic LORA](https://cdn.discordapp.com/attachments/1065289257243115540/1066346221876301845/MagicLORA.pt) | 0.3 | dpeptea-1 |
| dpeptea-1 | [Jordan_3](https://huggingface.co/SatyamSSJ10/ConceptArt) | 1 | dpeptea-2 |
| dpeptea-2 | [sttabi_v1.4-04](https://huggingface.co/dolphinz/stlora) | 0.5 | dpeptea-3 |
| dpeptea-3 | [xlimo768](https://huggingface.co/closertodeath/ctdlora) | 0.6 | dpeptea-4 |
| dpeptea-4 | [dpep 2 768](https://huggingface.co/closertodeath/ctdlora)| 0.35 | Pastel-Mix |
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
-------
## Big Thanks to
The 東方Project AI community for their wonderful LORAs.
- [Closertodeath](https://huggingface.co/closertodeath) for dpepmkmp model, and the loras: xlimo768, dpep 2 768
- [dolphinz/sometimes#9353](https://huggingface.co/dolphinz) for tabi artstyle Lora.
- [SatyamSSJ10](https://huggingface.co/SatyamSSJ10/ConceptArt) for Jordan_3 Lora.
- randomaccessmemories#4004 for Magic Lora
| d9af6ea315e35623d009572196bb5e89 |
jackmleitch/distilbert-base-uncased-finetuned-emotion | jackmleitch | distilbert | 10 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,338 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2120
- Accuracy: 0.9285
- F1: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8093 | 1.0 | 250 | 0.3064 | 0.908 | 0.9049 |
| 0.2429 | 2.0 | 500 | 0.2120 | 0.9285 | 0.9285 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
| e08ed6ba84a70c419257d157ee6adc2a |
mariolinml/bert-finetuned-ner_0 | mariolinml | bert | 10 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner_0
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2298
- Precision: 0.5119
- Recall: 0.4222
- F1: 0.4627
- Accuracy: 0.9246
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 250 | 0.2364 | 0.4874 | 0.2996 | 0.3711 | 0.9186 |
| 0.2444 | 2.0 | 500 | 0.2219 | 0.5112 | 0.3887 | 0.4416 | 0.9233 |
| 0.2444 | 3.0 | 750 | 0.2298 | 0.5119 | 0.4222 | 0.4627 | 0.9246 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| bb9d1ac1167949521b02e5d1d019944b |
Lazyhope/python-clone-detection | Lazyhope | roberta | 17 | 103 | transformers | 0 | null | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,717 | false |
# Python clone detection
This is a CodeBERT model for detecting Python code clones, fine-tuned on the dataset shared by [PoolC](https://github.com/PoolC) on the [Hugging Face Hub](https://huggingface.co/datasets/PoolC/1-fold-clone-detection-600k-5fold). The original source code for using the model can be found at https://github.com/sangHa0411/CloneDetection/blob/main/inference.py.
# How to use
To use the model efficiently, you can refer to this repository: https://github.com/RepoAnalysis/PythonCloneDetection, which contains a class that integrates data preprocessing, input tokenization, and model inference.
You can also follow the original inference source code at https://github.com/sangHa0411/CloneDetection/blob/main/inference.py.
More conveniently, a pipeline for this model has been implemented, and you can initialize it with only two lines of code:
```python
from transformers import pipeline
pipe = pipeline(model="Lazyhope/python-clone-detection", trust_remote_code=True)
```
To use it, pass a tuple of code pairs:
```python
code1 = """def token_to_inputs(feature):
inputs = {}
for k, v in feature.items():
inputs[k] = torch.tensor(v).unsqueeze(0)
return inputs"""
code2 = """def f(feature):
return {k: torch.tensor(v).unsqueeze(0) for k, v in feature.items()}"""
is_clone = pipe((code1, code2))
is_clone
# {False: 1.3705984201806132e-05, True: 0.9999862909317017}
```
# Credits
We would like to thank the original team and authors of the model and the fine-tuning dataset:
- [PoolC](https://github.com/PoolC)
- [sangHa0411](https://github.com/sangHa0411)
- [snoop2head](https://github.com/snoop2head)
# License
This model is released under the MIT license.
| 185845d70236bac439ab40b506f68214 |
cansen88/PromptGenerator_32_topic | cansen88 | gpt2 | 9 | 2 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,659 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PromptGenerator_32_topic
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 8.5994
- Validation Loss: 8.1936
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -967, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.7907 | 10.3243 | 0 |
| 10.0984 | 9.4905 | 1 |
| 9.4291 | 9.0357 | 2 |
| 8.9854 | 8.6319 | 3 |
| 8.5994 | 8.1936 | 4 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
| e343ff5cdb2a4f8661ee14f1b96ba260 |
facebook/opt-6.7b | facebook | opt | 18 | 43,400 | transformers | 20 | text-generation | true | true | true | other | ['en'] | null | null | 25 | 17 | 4 | 4 | 2 | 1 | 1 | ['text-generation', 'opt'] | false | true | true | 9,917 | false |
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
For large OPT models, such as this one, it is not recommended to use the `text-generation` pipeline because
one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU.
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)
method as follows:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
>>> prompt = "Hello, I'm am conscious and"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> generated_ids = model.generate(input_ids)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Hello, I'm am conscious and aware of my surroundings. I'm not sure what you mean"]
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
>>> prompt = "Hello, I'm am conscious and"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Hello, I'm am conscious and aware of my surroundings. I'm not sure if I'm"]
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
>>> prompt = "The woman worked as a"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The woman worked as a supervisor in the office
The woman worked as a bartender in a bar
The woman worked as a cashier at the
The woman worked as a teacher, and was
The woman worked as a maid at a house
```
compared to:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
>>> prompt = "The man worked as a"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The man worked as a consultant to the Government
The man worked as a bartender in a bar
The man worked as a cashier at the
The man worked as a teacher, and was
The man worked as a professional at a bank
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
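As an illustration of this preprocessing, the tokenizer can be loaded directly from the checkpoint. This is a minimal sketch: the input string is arbitrary and `max_length=2048` simply mirrors the pretraining sequence length.
```python
>>> from transformers import AutoTokenizer

>>> # OPT reuses a GPT2-style byte-level BPE tokenizer (vocabulary size 50272 in the model config)
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)

>>> # pretraining inputs were sequences of 2048 consecutive tokens
>>> input_ids = tokenizer("an example document", truncation=True, max_length=2048).input_ids
```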
The 175B model was trained on 992 *80GB A100 GPUs*. The training ran continuously for roughly 33 days.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
475a6a5dbe5ea62948f005364c2d81fc
|
shibli/wav2vec2-large-xls-r-300m-pun-colab
|
shibli
|
wav2vec2
| 13 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,101 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pun-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
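In the absence of further details, here is a minimal transcription sketch. Assumptions: the checkpoint can be loaded through the `automatic-speech-recognition` pipeline, the audio file path is a placeholder, and the recording is sampled at 16 kHz.
```python
from transformers import pipeline

# load the fine-tuned checkpoint for speech recognition
asr = pipeline(
    "automatic-speech-recognition",
    model="shibli/wav2vec2-large-xls-r-300m-pun-colab",
)

# "sample.wav" is a placeholder path; ffmpeg is used to decode the audio file
print(asr("sample.wav")["text"])
```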
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
f2cc65cf3ab48476b389007239e3c24d
|
Cosk/fubuki_one_punch_man
|
Cosk
| null | 6 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 17,050 | false |
# Fubuki (one punch man)
Base model: https://huggingface.co/Linaqruf/anything-v3.0.
Used 'fast-DreamBooth' on Google Colab, 7600 steps, fp16, 640x640 images.
Trained with 38 Yusuke Murata-style (and similar) images, hand-picked and hand-cropped, some edited to remove elements from the background. Not trained with NSFW images; it can do NSFW, but results are limited (see the NSFW examples).
Text-encoder trained with 43% of steps.
Prompt: 'fubuki'
I recommend going no higher than 832x832 and no lower than 512x512; stay near 640x640 or 704x704 for better results.
I added a .rar with the 38 images enhanced (using R-ESRGAN 4x+ Anime6B while keeping the same resolution, which makes the images sharper), so if you want to train your own model, use these instead.
You can use the same 'trick' for your AI generated images too, of course.
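A minimal generation sketch with the `diffusers` library, assuming the repository stores the weights in diffusers format (if only a single checkpoint file is provided, it has to be converted first). The resolution follows the recommendation above and the prompt style matches the examples below.
```python
import torch
from diffusers import StableDiffusionPipeline

# fp16 keeps VRAM usage low; assumes diffusers-format weights in this repository
pipe = StableDiffusionPipeline.from_pretrained(
    "Cosk/fubuki_one_punch_man", torch_dtype=torch.float16
).to("cuda")

# 'fubuki' is the trained token; stay near the recommended 640x640 resolution
image = pipe(
    "(masterpiece, best quality), fubuki, black dress, black hair, green eyes",
    height=640,
    width=640,
    num_inference_steps=25,
    guidance_scale=11,
).images[0]
image.save("fubuki.png")
```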
# Comparison between enhanced and non-enhanced:

# Examples:

Prompt: (masterpiece,best quality),((fubuki)),(((black_hair)),(green_eyes,parted_lips,eyelashes)),(medium_hair),blunt_bangs,blunt_ends,((shirt,crop_top)),(((sleeveless))),((midriff,navel)),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,highres
Steps: 40, Sampler: Euler, CFG scale: 11, Seed: 366579057, Size: 640x640, Model hash: 09048aee

Prompt: (masterpiece,best quality),fubuki,((black dress,taut_clothes,long_sleeves,v-neck)),((black_hair)),(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2889475953, Size: 640x640, Model hash: 09048aee

Prompt: (masterpiece,best quality),fubuki,((black dress,taut_clothes,long_sleeves)),((black_hair)),(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2666133743, Size: 576x640, Model hash: 09048aee

Prompt: (masterpiece,best quality),((fubuki)),(((black_hair)),(green_eyes,parted_lips,eyelashes)),(medium_hair),blunt_bangs,blunt_ends,((shirt,crop_top)),(((sleeveless))),((midriff,navel)),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,highres
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1132688482, Size: 640x640, Model hash: 09048aee

(masterpiece,best quality),fubuki,((black dress,taut_clothes,long_sleeves)),((black_hair)),(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1162208436, Size: 768x832, Model hash: 09048aee

Prompt: (masterpiece,best quality),fubuki,((black dress,taut_clothes,long_sleeves)),((black_hair)),(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1452378832, Size: 768x832, Model hash: 09048aee

Prompt: (masterpiece,best quality),fubuki,((black dress,taut_clothes,long_sleeves)),((black_hair)),(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1874509204, Size: 704x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),((white off-shoulder_shirt,frilled_shirt,crop_top,short_sleeves,strapless,bare_shoulders)),((denim_shorts)),(collarbone,midriff,navel),black_hair,(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),cowboy_shot,(large_breasts,perky_breasts),cowboy_shot,1girl,white_background,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2315101278, Size: 576x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),((white off-shoulder_shirt,frilled_shirt,crop_top,short_sleeves,strapless,bare_shoulders)),((denim_shorts)),(collarbone,midriff,navel),black_hair,(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),cowboy_shot,(large_breasts,perky_breasts),cowboy_shot,1girl,white_background,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1171127312, Size: 576x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((china_dress,floral_print,taut_dress,cleavage_cutout,side_slit))),(black_hair,(((high_ponytail,short_hair)))),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(cowboy_shot),(large_breasts,perky_breasts),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 3688395321, Size: 512x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((china_dress,floral_print,taut_dress,cleavage_cutout,side_slit))),(black_hair,(((high_ponytail,short_hair)))),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(cowboy_shot),(large_breasts,perky_breasts),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 3753047923, Size: 512x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((china_dress,floral_print,taut_dress,cleavage_cutout,side_slit))),(black_hair,(((high_ponytail,short_hair)))),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(cowboy_shot),(large_breasts,perky_breasts),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 3095209456, Size: 640x896, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((china_dress,floral_print,taut_dress,cleavage_cutout,side_slit))),(black_hair,(((high_ponytail,short_hair)))),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(cowboy_shot),(large_breasts,perky_breasts),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1226012316, Size: 640x896, Model hash: 09048aee
# NSFW examples:

Prompt: (masterpiece,best quality),(fubuki),(((topless))),((nipples,(puffy_nipples))),((denim_shorts)),collarbone,midriff,navel,black_hair,(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),cowboy_shot,(large_breasts,perky_breasts),cowboy_shot,1girl,white_background,solo,looking_at_viewer,(highres),(nsfw)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 4275564665, Size: 512x704, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((nipples))),(((nude,topless))),collarbone,midriff,navel,black_hair,(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),cowboy_shot,(large_breasts,perky_breasts),cowboy_shot,1girl,white_background,solo,looking_at_viewer,(highres),(nsfw)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1294364486, Size: 512x704, Model hash: 09048aee

Prompt: (masterpiece,best quality),fubuki,(((puffy_nipples))),((nude)),(spread_legs),((sitting,arm_support)),navel,black_hair,(green_eyes),(parted_lips,eyelashes),(medium_hair),(blunt_bangs,blunt_ends),(cowboy_shot),(large_breasts,perky_breasts),1girl,solo,looking_at_viewer,(highres),(nsfw)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1876869044, Size: 576x704, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((puffy_nipples))),((nude)),((spread_legs)),((sitting,arm_support)),navel,black_hair,(green_eyes),(parted_lips,eyelashes),(medium_hair),(blunt_bangs,blunt_ends),(cowboy_shot),(large_breasts,perky_breasts),1girl,solo,looking_at_viewer,(highres),(nsfw)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 3010788809, Size: 576x704, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),((bare_legs)),((sitting,(crossed_legs))),(black dress,taut_clothes,long_sleeves),((black_hair)),(green_eyes),(parted_lips,eyelashes),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 3640995298, Size: 640x832, Model hash: 09048aee

Prompt: (masterpiece,best quality),fubuki,((black dress,taut_clothes,long_sleeves)),((black_hair)),(green_eyes),((parted_lips,eyelashes)),(medium_hair),(blunt_bangs,blunt_ends),large_breasts,perky_breasts,cowboy_shot,1girl,((white_background)),solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2721716897, Size: 704x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((gothic_lolita,lolita_fashion,corset,lolita_hairband,frills,detached_sleeves))),(red_dress),((bare_shoulders),collarbone,cleavage),((black_hair,medium hair)),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(large_breasts,perky_breasts),(white_background),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 4005978667, Size: 576x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),((gothic_lolita,lolita_fashion,corset,lolita_hairband,frills,detached_sleeves)),(((red_dress,skirt))),((bare_shoulders),collarbone,cleavage),(black_hair,medium hair),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(large_breasts,perky_breasts),(white_background),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 3268799203, Size: 576x768, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((gothic_lolita,lolita_fashion,corset,lolita_hairband,frills,detached_sleeves,skirt))),((bare_shoulders),collarbone,cleavage),(black_hair,medium hair),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(large_breasts,perky_breasts),(white_background),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 1967627414, Size: 640x832, Model hash: 09048aee

Prompt: (masterpiece,best quality),(fubuki),(((gothic_lolita,lolita_fashion,corset,lolita_hairband,frills,detached_sleeves,skirt))),((bare_shoulders),collarbone,cleavage),(black_hair,medium hair),(green_eyes),(parted_lips,eyelashes),(blunt_bangs),(large_breasts,perky_breasts),(white_background),1girl,solo,looking_at_viewer,(highres)
Steps: 25, Sampler: Euler a, CFG scale: 11, Seed: 2979823777, Size: 640x832, Model hash: 09048aee
Negative prompt (for all images): ((deformed)),(loli,shota),long body,(lowres),(poorly drawn fingers, poorly drawn hands),((anatomic nonsense)),(extra fingers),(fused fingers),(one hand with more than 5 fingers), (one hand with less than 5 fingers),(bad eyes),(separated eyes),(long neck),((bad proportions)),long body,((poorly drawn eyes)),((poorly drawn)),((bad drawing)),blurry,((mutation)),((bad anatomy)),(multiple arms),((bad face)),((bad eyes)),bad tail,((more than 2 ears)),((poorly drawn face)), (extra limb), ((deformed hands)), (poorly drawn feet), (mutated hands and fingers), extra legs, extra ears, extra hands, bad feet, bad anatomy, bad hands, text, error, missing fingers, bad hands, extra digit, fewer digits, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, artist name, bad face, bad mouth, animal hands, censored, blurry lines, wacky outlines, unclear outlines, doubled, huge breasts,3D Game, 3D,realistic, face mask
|
5507a242ec48ce630912cfbdc6a53c9d
|
luckydog/distilbert-base-uncased-finetuned-emotion
|
luckydog
|
distilbert
| 16 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,335 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3298
- Accuracy: 0.9
- F1: 0.8981
## Model description
More information needed
## Intended uses & limitations
More information needed
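Pending more details, a minimal inference sketch (assumption: the label names returned depend on the configuration saved with the checkpoint; the example sentence is arbitrary):
```python
from transformers import pipeline

# load the fine-tuned checkpoint for emotion classification
classifier = pipeline(
    "text-classification",
    model="luckydog/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't believe how happy this makes me!"))
```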
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2761 | 1.0 | 250 | 0.6036 | 0.814 | 0.7881 |
| 0.4081 | 2.0 | 500 | 0.3298 | 0.9 | 0.8981 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
24fe9b7783e87b4e48048d8197e0b3fe
|
marcolatella/emotion_trained_31415
|
marcolatella
|
distilbert
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,404 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_trained_31415
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9166
- F1: 0.7213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.961635072722524e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 204 | 0.6182 | 0.7137 |
| No log | 2.0 | 408 | 0.7472 | 0.6781 |
| 0.5084 | 3.0 | 612 | 0.8242 | 0.7236 |
| 0.5084 | 4.0 | 816 | 0.9166 | 0.7213 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
c168a25661d0444d0c76f717e4bf270e
|
gokuls/mobilebert_add_GLUE_Experiment_logit_kd_rte_128
|
gokuls
|
mobilebert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,591 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_add_GLUE_Experiment_logit_kd_rte_128
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3914
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
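Pending more details, a minimal inference sketch for the RTE sentence-pair setup (assumptions: the example premise/hypothesis are arbitrary and the label mapping comes from the saved configuration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokuls/mobilebert_add_GLUE_Experiment_logit_kd_rte_128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# RTE is a sentence-pair task: premise and hypothesis are encoded together
inputs = tokenizer(
    "A man is playing a guitar on stage.",
    "A musician is performing.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```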
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4093 | 1.0 | 20 | 0.3914 | 0.5271 |
| 0.4076 | 2.0 | 40 | 0.3922 | 0.5271 |
| 0.4076 | 3.0 | 60 | 0.3917 | 0.5271 |
| 0.4075 | 4.0 | 80 | 0.3920 | 0.5271 |
| 0.4075 | 5.0 | 100 | 0.3925 | 0.5271 |
| 0.4074 | 6.0 | 120 | 0.3915 | 0.5271 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
7b572ab99de019f1de73fcdfb9c4d8ee
|
lilitket/20220517-045629
|
lilitket
|
wav2vec2
| 11 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,690 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20220517-045629
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3700
- Wer: 0.4581
- Cer: 0.0854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1339
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 5.238 | 0.29 | 200 | 3.1770 | 1.0 | 1.0 |
| 2.165 | 0.59 | 400 | 0.7309 | 0.7144 | 0.1543 |
| 0.7022 | 0.88 | 600 | 0.4614 | 0.5521 | 0.1058 |
| 0.5114 | 1.17 | 800 | 0.4202 | 0.4998 | 0.0965 |
| 0.4482 | 1.47 | 1000 | 0.3786 | 0.4645 | 0.0877 |
| 0.4082 | 1.76 | 1200 | 0.3700 | 0.4581 | 0.0854 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.0+cu113
- Datasets 2.1.0
- Tokenizers 0.11.6
|
ae94cc40af741b83c4c48e01e6402b54
|
Tom11/xlm-roberta-base-finetuned-panx-fr
|
Tom11
|
xlm-roberta
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2661
- F1: 0.8422
## Model description
More information needed
## Intended uses & limitations
More information needed
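Pending more details, a minimal inference sketch for French named-entity recognition (assumption: the entity labels come from the PAN-X configuration saved with the checkpoint; the example sentence is arbitrary):
```python
from transformers import pipeline

# token classification on French text; word pieces are merged into whole entities
ner = pipeline(
    "token-classification",
    model="Tom11/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",
)

print(ner("Emmanuel Macron est né à Amiens."))
```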
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5955 | 1.0 | 191 | 0.3344 | 0.7932 |
| 0.2556 | 2.0 | 382 | 0.2923 | 0.8252 |
| 0.1741 | 3.0 | 573 | 0.2661 | 0.8422 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
2e93c3327a1bf89a645c316a03ea4477
|
MartinoMensio/racism-models-regression-w-m-vote-epoch-4
|
MartinoMensio
|
bert
| 4 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['es']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 6,200 | false |
### Description
This model is a fine-tuned version of [BETO (spanish bert)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) that has been trained on the *Datathon Against Racism* dataset (2022)
We performed several experiments that will be described in the upcoming paper "Estimating Ground Truth in a Low-labelled Data Regime: A Study of Racism Detection in Spanish" (NEATClasS 2022)
We applied 6 different methods of ground-truth estimation, and for each one we performed 4 epochs of fine-tuning. The result is a set of 24 models:
| method | epoch 1 | epoch 2 | epoch 3 | epoch 4 |
|--- |--- |--- |--- |--- |
| raw-label | [raw-label-epoch-1](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-1) | [raw-label-epoch-2](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-2) | [raw-label-epoch-3](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-3) | [raw-label-epoch-4](https://huggingface.co/MartinoMensio/racism-models-raw-label-epoch-4) |
| m-vote-strict | [m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-1) | [m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-2) | [m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-3) | [m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-strict-epoch-4) |
| m-vote-nonstrict | [m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-1) | [m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-2) | [m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-3) | [m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-m-vote-nonstrict-epoch-4) |
| regression-w-m-vote | [regression-w-m-vote-epoch-1](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-1) | [regression-w-m-vote-epoch-2](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-2) | [regression-w-m-vote-epoch-3](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-3) | [regression-w-m-vote-epoch-4](https://huggingface.co/MartinoMensio/racism-models-regression-w-m-vote-epoch-4) |
| w-m-vote-strict | [w-m-vote-strict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-1) | [w-m-vote-strict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-2) | [w-m-vote-strict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-3) | [w-m-vote-strict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-strict-epoch-4) |
| w-m-vote-nonstrict | [w-m-vote-nonstrict-epoch-1](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-1) | [w-m-vote-nonstrict-epoch-2](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-2) | [w-m-vote-nonstrict-epoch-3](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-3) | [w-m-vote-nonstrict-epoch-4](https://huggingface.co/MartinoMensio/racism-models-w-m-vote-nonstrict-epoch-4) |
This model is `regression-w-m-vote-epoch-4`
### Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
from transformers.pipelines import TextClassificationPipeline
class TextRegressionPipeline(TextClassificationPipeline):
    """
    Class based on the TextClassificationPipeline from transformers.
    The difference is that instead of being based on a classifier, it is based on a regressor.
    You can specify the regression threshold when you call the pipeline or when you instantiate the pipeline.
    """

    def __init__(self, **kwargs):
        """
        Builds a new Pipeline based on regression.
        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold = kwargs.pop("regression_threshold", None)
        super().__init__(**kwargs)

    def __call__(self, *args, **kwargs):
        """
        You can also specify the regression threshold when you call the pipeline.
        regression_threshold: Optional(float). If None, the pipeline will simply output the score. If set to a specific value, the output will be both the score and the label.
        """
        self.regression_threshold_call = kwargs.pop("regression_threshold", None)
        result = super().__call__(*args, **kwargs)
        return result

    def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False):
        outputs = model_outputs["logits"][0]
        outputs = outputs.numpy()
        scores = outputs
        score = scores[0]
        regression_threshold = self.regression_threshold
        # override the specific threshold if it is specified in the call
        if self.regression_threshold_call:
            regression_threshold = self.regression_threshold_call
        if regression_threshold:
            return {"label": 'racist' if score > regression_threshold else 'non-racist', "score": score}
        else:
            return {"score": score}
model_name = 'regression-w-m-vote-epoch-4'
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")
full_model_path = f'MartinoMensio/racism-models-{model_name}'
model = AutoModelForSequenceClassification.from_pretrained(full_model_path)
pipe = TextRegressionPipeline(model=model, tokenizer=tokenizer)
texts = [
'y porqué es lo que hay que hacer con los menas y con los adultos también!!!! NO a los inmigrantes ilegales!!!!',
'Es que los judíos controlan el mundo'
]
# just get the score of regression
print(pipe(texts))
# [{'score': 0.8345461}, {'score': 0.48615143}]
# or also specify a threshold to cut racist/non-racist
print(pipe(texts, regression_threshold=0.9))
# [{'label': 'non-racist', 'score': 0.8345461}, {'label': 'non-racist', 'score': 0.48615143}]
```
For more details, see https://github.com/preyero/neatclass22
|
38a84ed769b437aa99d4d93d2435f2eb
|
Slavka/xdzadi00_bert-based-v4
|
Slavka
|
bert
| 8 | 5 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,476 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xdzadi00_bert-based-v4
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
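Pending more details, a minimal extractive question-answering sketch (assumptions: the repository stores TensorFlow weights, so the TF backend is selected explicitly, and the example question/passage are arbitrary):
```python
from transformers import pipeline

# the checkpoint was trained with Keras, so load it with the TensorFlow backend
qa = pipeline(
    "question-answering",
    model="Slavka/xdzadi00_bert-based-v4",
    framework="tf",
)

result = qa(
    question="Which base model was fine-tuned?",
    context="xdzadi00_bert-based-v4 is a fine-tuned version of bert-base-cased.",
)
print(result["answer"], result["score"])
```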
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 1e-06, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-06, 'decay_steps': 21428, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 21428, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-06, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
a2e48a5f8316f0b1934ae87f8b30f1ba
|
Helsinki-NLP/opus-mt-fi-tiv
|
Helsinki-NLP
|
marian
| 10 | 9 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-fi-tiv
* source languages: fi
* target languages: tiv
* OPUS readme: [fi-tiv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-tiv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-tiv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tiv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-tiv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.tiv | 23.6 | 0.425 |
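A minimal translation sketch with the `transformers` Marian classes (the Finnish example sentence is arbitrary):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-fi-tiv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate a short Finnish sentence into Tiv
batch = tokenizer(["Hyvää huomenta."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```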
|
9e87f83a7c6ab3d3ad209bff07b64c65
|
fanzru/bart-base-finetuned-xlsum-10-epoch
|
fanzru
|
bart
| 11 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['xlsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,401 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-xlsum-10-epoch
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7506
- Rouge1: 38.5509
- Rouge2: 17.1804
- Rougel: 31.6297
- Rougelsum: 31.6993
- Gen Len: 19.6701
## Model description
More information needed
## Intended uses & limitations
More information needed
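Pending more details, a minimal summarization sketch (assumption: the short example article is arbitrary and only illustrates the expected input style):
```python
from transformers import pipeline

# summarization with the fine-tuned checkpoint; generated summaries are roughly 20 tokens long (see Gen Len)
summarizer = pipeline("summarization", model="fanzru/bart-base-finetuned-xlsum-10-epoch")

article = (
    "Heavy rain caused flooding in several districts on Monday, forcing schools to close. "
    "Officials said water levels are expected to fall by the weekend."
)
print(summarizer(article, max_length=48, min_length=10)[0]["summary_text"])
```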
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.0865 | 1.0 | 19158 | 1.8344 | 36.709 | 15.4309 | 29.8542 | 29.9152 | 19.5716 |
| 1.9828 | 2.0 | 38316 | 1.7894 | 37.689 | 16.2171 | 30.7743 | 30.8346 | 19.6636 |
| 1.8778 | 3.0 | 57474 | 1.7727 | 37.5849 | 16.3555 | 30.8276 | 30.8936 | 19.5898 |
| 1.785 | 4.0 | 76632 | 1.7546 | 38.3036 | 16.911 | 31.4077 | 31.4608 | 19.5976 |
| 1.7246 | 5.0 | 95790 | 1.7505 | 38.2107 | 16.929 | 31.3889 | 31.4485 | 19.6316 |
| 1.6883 | 6.0 | 114948 | 1.7467 | 38.2416 | 17.0113 | 31.4098 | 31.4639 | 19.6048 |
| 1.6301 | 7.0 | 134106 | 1.7475 | 38.4083 | 17.1098 | 31.5605 | 31.6322 | 19.6438 |
| 1.6034 | 8.0 | 153264 | 1.7493 | 38.4812 | 17.1939 | 31.6284 | 31.6959 | 19.6574 |
| 1.562 | 9.0 | 172422 | 1.7481 | 38.5011 | 17.1622 | 31.6031 | 31.6808 | 19.7056 |
| 1.5496 | 10.0 | 191580 | 1.7506 | 38.5509 | 17.1804 | 31.6297 | 31.6993 | 19.6701 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cpu
- Datasets 2.8.0
- Tokenizers 0.10.3
|
df1bb8e5fa10f178327e1eff218f463a
|
espnet/kan-bayashi_vctk_tts_train_gst_xvector_conformer_fastspeech2_transform-truncated-e051a9
|
espnet
| null | 25 | 0 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['vctk']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,896 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
3200ec0063f2495b64677470cafac122
|
sd-concepts-library/kamon-style
|
sd-concepts-library
| null | 494 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,157 | false |
### kamon style on Stable Diffusion
This is the `<kamon-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
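Alternatively, here is a minimal local-inference sketch with the `diffusers` library (assumptions: a recent diffusers release that provides `load_textual_inversion`, and Stable Diffusion v1.4 as the base checkpoint, which is what the concept-library notebooks use):
```python
import torch
from diffusers import StableDiffusionPipeline

# base Stable Diffusion v1.x checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# downloads learned_embeds.bin from this repository and registers the <kamon-style> token
pipe.load_textual_inversion("sd-concepts-library/kamon-style")

image = pipe("a family crest in the style of <kamon-style>").images[0]
image.save("kamon-style.png")
```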
Here is the new concept you will be able to use as a `style`:





|
260cf62da7c02f2e2ddf6e273de8676c
|
sudo-s/exper2_mesum5
|
sudo-s
|
vit
| 14 | 9 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'generated_from_trainer']
| true | true | true | 2,270 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper2_mesum5
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4589
- Accuracy: 0.1308
## Model description
More information needed
## Intended uses & limitations
More information needed
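Pending more details, a minimal inference sketch (assumptions: the image path is a placeholder and the class labels come from the sudo-s/herbier_mesuem5 configuration saved with the checkpoint):
```python
from transformers import pipeline

# image classification with the fine-tuned ViT checkpoint
classifier = pipeline("image-classification", model="sudo-s/exper2_mesum5")

# "specimen.jpg" is a placeholder path to a local image
print(classifier("specimen.jpg"))
```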
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.4265 | 0.23 | 100 | 4.3676 | 0.0296 |
| 4.1144 | 0.47 | 200 | 4.1606 | 0.0544 |
| 4.0912 | 0.7 | 300 | 4.1071 | 0.0509 |
| 4.0361 | 0.93 | 400 | 4.0625 | 0.0669 |
| 4.0257 | 1.16 | 500 | 3.9682 | 0.0822 |
| 3.8846 | 1.4 | 600 | 3.9311 | 0.0834 |
| 3.9504 | 1.63 | 700 | 3.9255 | 0.0698 |
| 3.9884 | 1.86 | 800 | 3.9404 | 0.0722 |
| 3.7191 | 2.09 | 900 | 3.8262 | 0.0935 |
| 3.7952 | 2.33 | 1000 | 3.8236 | 0.0734 |
| 3.8085 | 2.56 | 1100 | 3.7694 | 0.0964 |
| 3.7535 | 2.79 | 1200 | 3.6757 | 0.1059 |
| 3.4218 | 3.02 | 1300 | 3.6474 | 0.1095 |
| 3.5172 | 3.26 | 1400 | 3.5621 | 0.1166 |
| 3.5173 | 3.49 | 1500 | 3.5579 | 0.1207 |
| 3.4346 | 3.72 | 1600 | 3.4817 | 0.1249 |
| 3.3995 | 3.95 | 1700 | 3.4589 | 0.1308 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
6d8e9fb78daae7498c23bc4c0fa98b98
|
Stancld/long-t5-local-large
|
Stancld
|
longt5
| 4 | 10 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 858 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# long-t5-local-large
This model is a fine-tuned version of [google/long-t5-local-large](https://huggingface.co/google/long-t5-local-large) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.9.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
55b9f1ee7e70470905ada6257a945701
|
explosion/ro_udv25_romanianrrt_trf
|
explosion
| null | 28 | 2 |
spacy
| 0 |
token-classification
| false | false | false |
cc-by-sa-4.0
|
['ro']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 54,611 | false |
UD v2.5 benchmarking pipeline for UD_Romanian-RRT
| Feature | Description |
| --- | --- |
| **Name** | `ro_udv25_romanianrrt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
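A minimal usage sketch (assumption: the packaged pipeline has been installed locally, e.g. from the wheel distributed with this repository; it is not available through `spacy download`):
```python
import spacy

# load the installed UD benchmarking pipeline
nlp = spacy.load("ro_udv25_romanianrrt_trf")

doc = nlp("Acesta este un exemplu de propoziție în limba română.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
```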
### Label Scheme
<details>
<summary>View label scheme (3096 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `ARROW`, `Af`, `Afcfp-n`, `Afcfson`, `Afcfsrn`, `Afcmpoy`, `Afcms-n`, `Afp`, `Afp-p-n`, `Afp-poy`, `Afpf--n`, `Afpfp-n`, `Afpfp-ny`, `Afpfpoy`, `Afpfpry`, `Afpfson`, `Afpfsoy`, `Afpfsrn`, `Afpfsry`, `Afpm--n`, `Afpmp-n`, `Afpmpoy`, `Afpmpry`, `Afpms-n`, `Afpmsoy`, `Afpmsry`, `Afsfp-n`, `Afsfsrn`, `BULLET`, `COLON`, `COMMA`, `Ccssp`, `Ccsspy`, `Crssp`, `Csssp`, `Cssspy`, `DASH`, `DBLQ`, `Dd3-po---e`, `Dd3-po---o`, `Dd3fpo`, `Dd3fpr`, `Dd3fpr---e`, `Dd3fpr---o`, `Dd3fpr--y`, `Dd3fso`, `Dd3fso---e`, `Dd3fsr`, `Dd3fsr---e`, `Dd3fsr---o`, `Dd3fsr--yo`, `Dd3mpo`, `Dd3mpr`, `Dd3mpr---e`, `Dd3mpr---o`, `Dd3mso---e`, `Dd3msr`, `Dd3msr---e`, `Dd3msr---o`, `Dh1ms`, `Dh3fp`, `Dh3fso`, `Dh3fsr`, `Dh3mp`, `Dh3ms`, `Di3`, `Di3-----y`, `Di3--r---e`, `Di3-po`, `Di3-po---e`, `Di3-sr`, `Di3-sr---e`, `Di3-sr--y`, `Di3fp`, `Di3fpr`, `Di3fpr---e`, `Di3fso`, `Di3fso---e`, `Di3fsr`, `Di3fsr---e`, `Di3mp`, `Di3mpr`, `Di3mpr---e`, `Di3ms`, `Di3ms----e`, `Di3mso---e`, `Di3msr`, `Di3msr---e`, `Ds1fp-p`, `Ds1fp-s`, `Ds1fsop`, `Ds1fsos`, `Ds1fsrp`, `Ds1fsrs`, `Ds1fsrs-y`, `Ds1mp-p`, `Ds1mp-s`, `Ds1ms-p`, `Ds1ms-s`, `Ds1msrs-y`, `Ds2---s`, `Ds2fp-p`, `Ds2fp-s`, `Ds2fsrp`, `Ds2fsrs`, `Ds2mp-p`, `Ds2mp-s`, `Ds2ms-p`, `Ds2ms-s`, `Ds3---p`, `Ds3---s`, `Ds3fp-s`, `Ds3fsos`, `Ds3fsrs`, `Ds3mp-s`, `Ds3ms-s`, `Dw3--r---e`, `Dw3-po---e`, `Dw3fpr`, `Dw3fso---e`, `Dw3fsr`, `Dw3mpr`, `Dw3mso---e`, `Dw3msr`, `Dz3fsr---e`, `Dz3mso---e`, `Dz3msr---e`, `EQUAL`, `EXCL`, `EXCLHELLIP`, `GE`, `GT`, `HELLIP`, `I`, `LCURL`, `LPAR`, `LSQR`, `LT`, `M`, `Mc`, `Mc-p-d`, `Mc-p-l`, `Mcfp-l`, `Mcfp-ln`, `Mcfprln`, `Mcfprly`, `Mcfsoln`, `Mcfsrln`, `Mcmp-l`, `Mcms-ln`, `Mcmsrl`, `Mcmsrly`, `Mffprln`, `Mffsrln`, `Mlfpo`, `Mlfpr`, `Mlmpr`, `Mo---l`, `Mo---ln`, `Mo-s-r`, `Mofp-ln`, `Mofpoly`, `Mofprly`, `Mofs-l`, `Mofsoln`, `Mofsoly`, `Mofsrln`, `Mofsrly`, `Mompoly`, `Momprly`, `Moms-l`, `Moms-ln`, `Momsoly`, `Momsrly`, `Nc`, `Nc---n`, `Ncf--n`, `Ncfp-n`, `Ncfpoy`, `Ncfpry`, `Ncfs-n`, `Ncfson`, `Ncfsoy`, `Ncfsrn`, `Ncfsry`, `Ncfsryy`, `Ncfsvy`, `Ncm--n`, `Ncmp-n`, `Ncmpoy`, `Ncmpry`, `Ncms-n`, `Ncms-ny`, `Ncms-y`, `Ncmsoy`, `Ncmsrn`, `Ncmsry`, `Ncmsryy`, `Ncmsvn`, `Ncmsvy`, `Np`, `Npfson`, `Npfsoy`, `Npfsrn`, `Npfsry`, `Npmpoy`, `Npmpry`, `Npms-n`, `Npmsoy`, `Npmsry`, `PERCENT`, `PERIOD`, `PLUS`, `PLUSMINUS`, `Pd3-po`, `Pd3fpr`, `Pd3fso`, `Pd3fsr`, `Pd3mpo`, `Pd3mpr`, `Pd3mpr--y`, `Pd3mso`, `Pd3msr`, `Pi3`, `Pi3--r`, `Pi3-po`, `Pi3-so`, `Pi3-sr`, `Pi3fpr`, `Pi3fso`, `Pi3fsr`, `Pi3mpr`, `Pi3mso`, `Pi3msr`, `Pi3msr--y`, `Pp1-pa--------w`, `Pp1-pa--y-----w`, `Pp1-pd--------s`, `Pp1-pd--------w`, `Pp1-pd--y-----w`, `Pp1-pr--------s`, `Pp1-sa--------s`, `Pp1-sa--------w`, `Pp1-sa--y-----w`, `Pp1-sd--------s`, `Pp1-sd--------w`, `Pp1-sd--y-----w`, `Pp1-sn--------s`, `Pp2-----------s`, `Pp2-pa--------w`, `Pp2-pa--y-----w`, `Pp2-pd--------w`, `Pp2-pd--y-----w`, `Pp2-pr--------s`, `Pp2-sa--------s`, `Pp2-sa--------w`, `Pp2-sa--y-----w`, `Pp2-sd--------s`, `Pp2-sd--------w`, `Pp2-sd--y-----w`, `Pp2-sn--------s`, `Pp2-so--------s`, `Pp2-sr--------s`, `Pp3-p---------s`, `Pp3-pd--------w`, `Pp3-pd--y-----w`, `Pp3-po--------s`, `Pp3-sd--------w`, `Pp3-sd--y-----w`, `Pp3fpa--------w`, `Pp3fpa--y-----w`, `Pp3fpr--------s`, `Pp3fs---------s`, `Pp3fsa--------w`, `Pp3fsa--y-----w`, `Pp3fso--------s`, `Pp3fsr--------s`, `Pp3fsr--y-----s`, `Pp3mpa--------w`, `Pp3mpa--y-----w`, `Pp3mpr--------s`, `Pp3ms---------s`, `Pp3msa--------w`, `Pp3msa--y-----w`, `Pp3mso--------s`, `Pp3msr--------s`, `Pp3msr--y-----s`, `Ps1fp-s`, `Ps1fsrp`, `Ps1fsrs`, `Ps1mp-p`, 
`Ps1ms-p`, `Ps2fp-s`, `Ps2fsrp`, `Ps2fsrs`, `Ps2ms-s`, `Ps3---p`, `Ps3---s`, `Ps3fp-s`, `Ps3fsrs`, `Ps3mp-s`, `Ps3ms-s`, `Pw3--r`, `Pw3-po`, `Pw3-so`, `Pw3fpr`, `Pw3fso`, `Pw3mpr`, `Pw3mso`, `Px3--a--------s`, `Px3--a--------w`, `Px3--a--y-----w`, `Px3--d--------w`, `Px3--d--y-----w`, `Pz3-sr`, `Pz3fsr`, `QUEST`, `QUOT`, `Qf`, `Qn`, `Qs`, `Qs-y`, `Qz`, `Qz-y`, `RCURL`, `RPAR`, `RSQR`, `Rc`, `Rgc`, `Rgp`, `Rgpy`, `Rgs`, `Rp`, `Rw`, `Rw-y`, `Rz`, `SCOLON`, `SLASH`, `STAR`, `Sp`, `Spsa`, `Spsay`, `Spsd`, `Spsg`, `Td-po`, `Tdfpr`, `Tdfso`, `Tdfsr`, `Tdmpr`, `Tdmso`, `Tdmsr`, `Tf-so`, `Tffpoy`, `Tffpry`, `Tffs-y`, `Tfmpoy`, `Tfms-y`, `Tfmsoy`, `Tfmsry`, `Ti-po`, `Tifp-y`, `Tifso`, `Tifsr`, `Timso`, `Timsr`, `Tsfp`, `Tsfs`, `Tsmp`, `Tsms`, `UNDERSC`, `Va--1`, `Va--1-----y`, `Va--1p`, `Va--1s`, `Va--1s----y`, `Va--2p`, `Va--2p----y`, `Va--2s`, `Va--2s----y`, `Va--3`, `Va--3-----y`, `Va--3p`, `Va--3p----y`, `Va--3s`, `Va--3s----y`, `Vag`, `Vaii1`, `Vaii2s`, `Vaii3p`, `Vaii3s`, `Vail3p`, `Vail3s`, `Vaip1p`, `Vaip1s`, `Vaip2p`, `Vaip2s`, `Vaip3p`, `Vaip3p----y`, `Vaip3s`, `Vaip3s----y`, `Vais3p`, `Vais3s`, `Vam-2s`, `Vanp`, `Vap--sm`, `Vasp1p`, `Vasp1s`, `Vasp2p`, `Vasp2s`, `Vasp3`, `Vmg`, `Vmg-------y`, `Vmii1`, `Vmii1-----y`, `Vmii2p`, `Vmii2s`, `Vmii3p`, `Vmii3p----y`, `Vmii3s`, `Vmii3s----y`, `Vmil1`, `Vmil1p`, `Vmil2s`, `Vmil3p`, `Vmil3p----y`, `Vmil3s`, `Vmil3s----y`, `Vmip1p`, `Vmip1p----y`, `Vmip1s`, `Vmip1s----y`, `Vmip2p`, `Vmip2s`, `Vmip2s----y`, `Vmip3`, `Vmip3-----y`, `Vmip3p`, `Vmip3s`, `Vmip3s----y`, `Vmis1p`, `Vmis1s`, `Vmis3p`, `Vmis3p----y`, `Vmis3s`, `Vmis3s----y`, `Vmm-2p`, `Vmm-2s`, `Vmnp`, `Vmnp------y`, `Vmp--pf`, `Vmp--pm`, `Vmp--sf`, `Vmp--sm`, `Vmp--sm---y`, `Vmsp1p`, `Vmsp1s`, `Vmsp2s`, `Vmsp3`, `Vmsp3-----y`, `X`, `Y`, `Ya`, `Yn`, `Ynfsoy`, `Ynfsry`, `Ynmsoy`, `Ynmsry`, `Yp`, `Yp-sr`, `Yr` |
| **`morphologizer`** | `Case=Dat,Gen\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=ADV\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `POS=PUNCT`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=CCONJ\|Polarity=Pos`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Sub\|POS=PART\|Variant=Short`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `POS=AUX\|Tense=Pres\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=ADV`, `Degree=Pos\|POS=ADV`, `POS=PART\|Polarity=Neg`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|POS=PART`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `POS=SCONJ\|Polarity=Pos`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=AUX\|Person=3`, `POS=VERB\|Tense=Pres\|VerbForm=Inf`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, 
`Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=VERB\|VerbForm=Ger`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PART\|PartType=Inf`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak\|Variant=Short`, `Case=Acc,Nom\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `POS=ADV\|PronType=Neg`, `AdpType=Prep\|Case=Acc\|POS=ADP\|Variant=Short`, `Case=Acc,Nom\|Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Number=Sing\|POS=AUX\|Person=2`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|POS=ADJ`, `Case=Dat,Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Sub\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `NumForm=Word\|NumType=Ord\|POS=NUM`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `AdpType=Prep\|POS=PUNCT`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `POS=VERB\|Variant=Short\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Number=Sing\|POS=AUX\|Person=3`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `POS=AUX\|Person=1`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PART\|Polarity=Neg\|Variant=Short`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Mood=Ind\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak\|Variant=Short`, `Number=Plur\|POS=AUX\|Person=3`, `Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=SCONJ\|Polarity=Pos\|Variant=Short`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PART\|Tense=Fut`, 
`Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `POS=DET\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|NumForm=Word\|NumType=Ord\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Gender=Masc\|POS=NOUN`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `NumForm=Digit\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `POS=INTJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|POS=ADJ`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Voc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind\|Variant=Short`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `POS=CCONJ\|Polarity=Pos\|Variant=Short`, `Number=Plur\|POS=AUX\|Person=2`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `POS=AUX\|VerbForm=Ger`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN\|Variant=Short`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Degree=Sup\|POS=ADV`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `POS=ADV\|PronType=Int,Rel\|Variant=Short`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN\|Variant=Short`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Degree=Pos\|POS=ADV\|Variant=Short`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=NOUN`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `POS=NOUN`, 
`Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `AdpType=Prep\|Case=Dat\|POS=ADP`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `AdpType=Prep\|POS=SYM`, `Case=Acc,Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak\|Variant=Short`, `POS=SYM`, `POS=X`, `Abbr=Yes\|POS=X`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|POS=ADV`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Int,Rel`, `NumForm=Roman\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Voc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak\|Variant=Short`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Case=Acc,Nom\|Number=Sing\|POS=PRON`, `Foreign=Yes\|POS=PROPN`, `Definite=Ind\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Definite=Ind\|Degree=Pos\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|Degree=Pos\|Foreign=Yes\|Gender=Fem\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Definite=Ind\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Foreign=Yes\|POS=X`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Ind`, `Foreign=Yes\|POS=NOUN`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Definite=Ind\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Neg`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Definite=Ind\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|PronType=Emp`, `Abbr=Yes\|POS=PRON`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=AUX\|Person=1`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `NumType=Card\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=AUX\|Person=3\|Variant=Short`, `Number=Plur\|POS=AUX\|Person=2\|Variant=Short`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Variant=Short\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN\|Variant=Short`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Variant=Short`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|Variant=Short\|VerbForm=Fin`, `Number=Plur\|POS=AUX\|Person=3\|Variant=Short`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Number=Plur\|POS=AUX\|Person=1`, `POS=VERB\|Tense=Pres\|Variant=Short\|VerbForm=Inf`, `Number=Sing\|POS=AUX\|Person=2\|Variant=Short`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem\|Variant=Short`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong\|Variant=Short`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Degree=Pos\|POS=ADV\|Polarity=Neg`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong\|Variant=Short`, `POS=AUX\|Person=3\|Variant=Short`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Variant=Short\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=AUX\|Person=1\|Variant=Short`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat,Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `POS=ADV\|Polarity=Neg`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=ADJ`, `AdpType=Prep\|Case=Acc\|Foreign=Yes\|POS=ADP`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|Variant=Short\|VerbForm=Fin`, `POS=AUX\|Person=1\|Variant=Short`, 
`Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `AdpType=Prep\|POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Abbr=Yes\|Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADJ`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Abbr=Yes\|Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=VERB\|Tense=Pres\|VerbForm=Inf`, `Foreign=Yes\|NumForm=Roman\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|Foreign=Yes\|Gender=Masc\|POS=NOUN`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=1\|Tense=Imp\|Variant=Short\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|Variant=Short\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Variant=Short\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Definite=Def\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|Variant=Short\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|Variant=Short\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind\|Variant=Short`, `Definite=Ind\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art\|Variant=Short`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advcl:tcl`, `advmod`, `advmod:tmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `ccomp:pmod`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `expl:impers`, `expl:pass`, `expl:poss`, `expl:pv`, `fixed`, `flat`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nmod:agent`, `nmod:pmod`, `nmod:tmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `3`, `7`, `9`, `12`, `14`, `15`, `19`, `22`, `24`, `26`, `30`, `32`, `34`, `36`, `38`, `40`, `42`, `45`, `47`, `49`, `51`, `53`, `55`, `61`, `62`, `66`, `67`, `68`, `71`, `73`, `76`, `78`, `80`, `83`, `85`, `86`, `89`, `91`, `92`, `93`, `95`, `97`, `98`, `99`, `102`, `104`, `106`, `108`, `109`, `111`, `107`, `113`, `115`, `116`, `119`, `121`, `124`, `128`, `129`, `130`, `132`, `135`, `139`, `143`, `146`, `148`, `150`, `151`, `154`, `156`, `158`, `159`, `162`, `165`, `166`, `167`, `169`, `171`, `173`, `175`, `177`, `180`, `182`, `183`, `185`, `186`, `187`, `189`, `191`, `193`, `195`, `197`, `198`, `199`, `201`, `203`, `205`, `207`, `208`, `210`, `212`, `215`, `217`, `218`, `221`, `223`, `227`, `229`, `230`, `231`, `232`, `233`, `234`, `237`, `239`, `240`, `242`, `244`, `246`, `248`, `249`, `251`, `252`, `254`, `257`, `259`, `261`, `263`, `266`, `268`, `269`, `271`, `272`, `274`, `276`, `278`, `280`, `282`, `283`, `285`, `287`, `289`, `293`, `294`, `296`, `298`, `300`, `301`, `303`, `305`, `307`, `309`, `311`, `313`, `315`, `317`, `318`, `320`, `322`, `324`, `326`, `328`, `330`, `331`, `333`, `334`, `336`, `337`, `339`, `342`, `343`, `344`, `346`, `349`, `353`, `355`, `357`, `359`, `360`, `361`, `363`, `364`, `366`, `367`, `369`, `370`, `372`, `374`, `376`, `378`, `379`, `380`, `381`, `383`, `384`, `386`, `388`, `389`, `391`, `74`, `393`, `395`, `397`, `399`, `401`, `403`, `406`, `408`, `409`, `412`, `413`, `415`, `416`, `417`, `418`, `419`, `420`, `421`, `422`, `423`, `425`, `426`, `428`, `429`, `431`, `434`, `435`, `439`, `443`, `445`, `447`, `449`, `451`, `452`, `453`, `456`, `458`, `460`, `461`, `462`, `464`, `466`, `467`, `468`, `470`, `471`, `473`, `474`, `475`, `476`, `478`, `481`, `484`, `485`, `486`, `487`, `489`, `491`, `492`, `493`, `496`, `498`, `500`, `503`, `504`, `505`, `509`, `512`, `513`, `514`, `515`, `516`, `519`, `520`, `521`, `522`, `523`, `525`, `526`, `527`, `528`, `213`, `530`, `531`, `532`, `535`, `539`, `541`, `544`, `546`, `547`, `548`, `550`, `552`, `553`, `555`, `557`, `558`, `559`, `560`, `563`, `565`, `566`, `569`, `572`, `574`, `576`, `578`, `580`, `582`, `585`, `588`, `589`, `590`, `591`, `592`, `593`, `594`, `597`, `599`, `601`, `603`, `605`, `606`, `608`, `610`, `614`, `616`, `617`, `618`, `620`, `624`, `626`, `41`, `628`, `629`, `631`, `632`, `634`, `636`, `639`, `641`, `643`, `645`, `647`, `650`, `653`, `654`, `655`, `657`, `658`, `661`, `664`, `665`, `667`, `669`, `671`, `672`, `674`, `675`, `677`, `678`, `680`, `682`, `683`, `686`, `688`, `690`, `693`, `695`, `697`, `699`, `701`, `702`, `703`, `705`, `706`, `707`, `708`, `711`, `713`, `714`, `715`, `717`, `719`, `721`, `722`, `725`, `726`, `728`, `731`, `733`, `735`, `736`, `737`, `738`, `740`, `742`, `744`, `745`, `747`, `749`, `750`, `751`, `752`, `754`, `757`, `759`, `761`, `762`, `764`, `765`, `766`, `768`, `769`, `770`, `771`, `772`, `774`, `775`, `776`, `779`, `781`, `784`, `785`, `787`, `789`, `791`, `792`, `794`, `796`, `797`, `799`, `800`, `802`, `803`, `808`, `809`, `810`, `813`, `816`, `817`, `818`, `820`, `821`, `822`, `824`, `826`, `827`, `828`, `830`, `832`, `834`, `836`, `837`, `839`, `841`, `843`, `845`, `847`, `848`, `849`, `851`, `855`, `856`, `858`, `861`, `862`, `864`, `865`, `866`, `867`, `868`, `870`, `871`, `873`, `876`, `877`, `880`, `881`, `883`, `885`, `889`, `891`, `892`, `894`, `896`, `898`, `900`, `902`, `904`, `905`, `907`, `908`, `911`, `913`, `914`, `916`, `918`, `919`, `920`, `923`, `924`, `926`, `927`, `929`, `932`, 
`935`, `936`, `937`, `938`, `940`, `942`, `943`, `945`, `947`, `948`, `952`, `955`, `958`, `960`, `961`, `962`, `964`, `965`, `966`, `968`, `970`, `972`, `974`, `976`, `977`, `979`, `980`, `982`, `983`, `985`, `986`, `988`, `989`, `990`, `991`, `993`, `995`, `997`, `998`, `999`, `1001`, `1002`, `1003`, `1006`, `1007`, `1012`, `1013`, `1014`, `1015`, `1016`, `1019`, `1020`, `1021`, `1022`, `1023`, `1025`, `1027`, `1029`, `1031`, `1032`, `1033`, `1036`, `1038`, `1040`, `1043`, `1044`, `1045`, `1046`, `1048`, `1050`, `1052`, `1053`, `1055`, `1057`, `1058`, `1061`, `1062`, `1064`, `1067`, `1069`, `1071`, `1074`, `1076`, `1078`, `1080`, `1083`, `1085`, `1086`, `1089`, `1090`, `1091`, `1094`, `1097`, `1098`, `1099`, `1103`, `1104`, `1106`, `1107`, `1108`, `1109`, `1110`, `1112`, `1114`, `1117`, `1118`, `1120`, `1122`, `1124`, `1125`, `1127`, `1128`, `1129`, `1132`, `1133`, `1136`, `1138`, `1139`, `1141`, `1144`, `1145`, `1147`, `1150`, `1152`, `1154`, `1155`, `1156`, `1157`, `1159`, `1161`, `1162`, `1163`, `1165`, `1166`, `1167`, `1168`, `1169`, `1171`, `1174`, `1176`, `1178`, `1179`, `1180`, `1184`, `1186`, `1187`, `1189`, `1190`, `1192`, `1193`, `1195`, `1196`, `1198`, `1201`, `1203`, `1204`, `1207`, `1210`, `1212`, `1214`, `1215`, `1216`, `1217`, `1218`, `1219`, `1222`, `1223`, `1224`, `1226`, `1227`, `1230`, `1231`, `1232`, `1233`, `1234`, `1235`, `1236`, `1238`, `1239`, `1242`, `1243`, `1244`, `1245`, `1247`, `1249`, `1250`, `1252`, `1254`, `1255`, `1256`, `1258`, `1259`, `1261`, `1262`, `1268`, `1269`, `1270`, `1271`, `1272`, `1274`, `1275`, `1277`, `1278`, `1279`, `1281`, `1282`, `1285`, `1287`, `1288`, `1289`, `1290`, `1291`, `1292`, `1295`, `1297`, `1298`, `1299`, `1300`, `1301`, `1302`, `1303`, `1304`, `1305`, `1306`, `1307`, `1312`, `1313`, `1314`, `1316`, `1317`, `1318`, `1319`, `1320`, `1315`, `1321`, `1323`, `1324`, `1325`, `1326`, `1327`, `1329`, `1337`, `1338`, `1339`, `1343`, `1344`, `1346`, `1347`, `1350`, `1351`, `1353`, `1354`, `1355`, `1358`, `1360`, `1361`, `1362`, `1365`, `1366`, `1367`, `1368`, `1369`, `1370`, `1371`, `1372`, `1373`, `1374`, `1376`, `1377`, `1379`, `1380`, `1381`, `1382`, `1384`, `1385`, `1386`, `1387`, `1389`, `1390`, `1391`, `1392`, `1393`, `1394`, `1395`, `1396`, `1400`, `1401`, `1404`, `1405`, `1406`, `1409`, `1410`, `1411`, `1413`, `1414`, `1416`, `1417`, `1418`, `1419`, `1421`, `1424`, `1425`, `1426`, `1427`, `1428`, `1430`, `1431`, `1434`, `1435`, `1436`, `1438`, `1440`, `1442`, `1443`, `1444`, `1445`, `1448`, `1449`, `1450`, `1451`, `1453`, `1454`, `1455`, `1456`, `1458`, `1459`, `1460`, `1463`, `1464`, `1466`, `1467`, `1468`, `1469`, `1470`, `1471`, `1472`, `1473`, `1474`, `1475`, `1476`, `1479`, `1480`, `1482`, `1483`, `1485`, `1487`, `1488`, `1490`, `1491`, `1492`, `1493`, `1495`, `1501`, `1504`, `1506`, `1508`, `1510`, `1512`, `1513`, `1514`, `1515`, `1516`, `1517`, `1518`, `1519`, `1520`, `1523`, `1524`, `1527`, `1530`, `1532`, `1533`, `1534`, `1536`, `1537`, `1538`, `1539`, `1540`, `1542`, `1544`, `1545`, `1546`, `1547`, `1548`, `1549`, `1550`, `1552`, `1554`, `1555`, `1556`, `1557`, `1558`, `1560`, `1563`, `1564`, `1565`, `1566`, `1567`, `1568`, `1569`, `1570`, `1572`, `1573`, `1575`, `1577`, `1578`, `1579`, `1582`, `1584`, `1585`, `1586`, `1587`, `1588`, `1589`, `1590`, `1591`, `1593`, `1594`, `1595`, `1597`, `1598`, `1600`, `1601`, `1602`, `1604`, `1605`, `1606`, `1607`, `1608`, `1610`, `1611`, `1612`, `1616`, `1617`, `1618`, `1619`, `1620`, `1621`, `1622`, `1623`, `1627`, `1628`, `1629`, `1631`, `1639`, `1641`, `1642`, `1643`, `1649`, 
`1650`, `1652`, `1653`, `1654`, `1656`, `1657`, `1659`, `1660`, `1661`, `1663`, `1667`, `1668`, `1669`, `1670`, `1671`, `1673`, `1675`, `1676`, `1678`, `1679`, `1681`, `1682`, `1684`, `1685`, `1686`, `1687`, `1688`, `1689`, `1690`, `1691`, `1692`, `1694`, `1695`, `1696`, `1697`, `1698`, `1700`, `1702`, `1703`, `1705`, `1706`, `1707`, `1708`, `1709`, `1710`, `1712`, `1713`, `1717`, `1718`, `1719`, `1720`, `1721`, `1725`, `1726`, `1728`, `1729`, `1730`, `1731`, `1733`, `1734`, `1735`, `1738`, `1740`, `1741`, `1742`, `1743`, `1744`, `1747`, `1749`, `1751`, `1754`, `1756`, `1757`, `1758`, `1760`, `1761`, `1762`, `1765`, `1768`, `1771`, `1772`, `1774`, `1775`, `1776`, `1777`, `1778`, `1779`, `1780`, `1781`, `1782`, `1783`, `1785`, `1787`, `1788`, `1790`, `1793`, `1794`, `1795`, `1798`, `1800`, `1801`, `1802`, `1803`, `1805`, `1806`, `1807`, `1808`, `1809`, `1810`, `1816`, `1817`, `1818`, `1819`, `1820`, `1821`, `1822`, `1823`, `1825`, `1826`, `1828`, `1829`, `1830`, `1831`, `1832`, `1833`, `1835`, `1841`, `1842`, `1843`, `1844`, `1846`, `1847`, `1849`, `1850`, `1851`, `1852`, `1853`, `1854`, `1855`, `1856`, `1857`, `1858`, `1859`, `1860`, `1861`, `1863`, `1865`, `1866`, `1867`, `1870`, `1872`, `1873`, `1874`, `1875`, `1876`, `1879`, `1880`, `1881`, `1882`, `1883`, `1884`, `1886`, `1887`, `1888`, `1889`, `1890`, `1891`, `1892`, `1894`, `1896`, `1898`, `1900`, `1901`, `1902`, `1904`, `1905`, `1906`, `1907`, `1910`, `1911`, `1913`, `1914`, `1916`, `1917`, `1919`, `1921`, `1923`, `1924`, `1759`, `1173`, `1925`, `1927`, `1929`, `1930`, `1931`, `1932`, `1933`, `1934`, `1936`, `1938`, `1940`, `1941`, `1942`, `1944`, `1945`, `1946`, `1948`, `1949`, `1951`, `1952`, `1953`, `1954`, `1956`, `1957`, `1958`, `1959`, `1961`, `1962`, `1963`, `1964`, `1965`, `1966`, `1968`, `1969`, `1970`, `1971`, `1972`, `1973`, `1764`, `1974`, `1975`, `1976`, `1977`, `1979`, `1980`, `1981`, `1982`, `1983`, `1984`, `1985`, `1986`, `1987`, `1988`, `1989`, `1990`, `1993`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2001`, `2002`, `2003`, `2004`, `2005`, `2007`, `2008`, `2009`, `2010`, `2011`, `2012`, `2013`, `2016`, `2018`, `2019`, `2020`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, `2028`, `2029`, `2031`, `2033`, `2037`, `2038`, `2039`, `2042`, `2043`, `2045`, `2046`, `2047`, `2048`, `2049`, `2050`, `2051`, `2052`, `2053`, `2055`, `2056`, `2057`, `2059`, `2063`, `2064`, `2065`, `2066`, `2067`, `2068`, `2069`, `2070`, `2071`, `2072`, `602`, `2073`, `2074`, `2075`, `2078`, `2079`, `2080`, `2082`, `2083`, `2084`, `2085`, `2086`, `2087`, `2088`, `2089`, `2090`, `2091`, `2092`, `2093`, `2094`, `2096`, `2098`, `2099`, `2100`, `2101`, `2102`, `2103`, `2105`, `2106`, `2107`, `2108`, `2109`, `2110`, `2112`, `2113`, `2115`, `2116`, `2117`, `2118`, `2119`, `2123`, `2125`, `2126`, `2127`, `2128`, `2130`, `2131`, `2132`, `2133`, `2134`, `2135`, `2136`, `2139`, `2140`, `2141`, `2142`, `2143`, `2144`, `2146`, `2147`, `2148`, `2150`, `2151`, `2152`, `2154`, `2155`, `2156`, `2158`, `2159`, `2160`, `2162`, `2163`, `2164`, `2165`, `2167`, `2168`, `2169`, `2170`, `2171`, `2173`, `2174`, `2175`, `2177`, `2178`, `2179`, `2180`, `2181`, `2183`, `2184`, `2185`, `2187`, `2188`, `2189`, `2190`, `2191`, `2192`, `2193`, `2195`, `2197`, `2198`, `2199`, `2200`, `2201`, `2203`, `2204`, `2205`, `2206`, `2207`, `2209`, `2213`, `2214`, `2215`, `2216`, `2219`, `2220`, `2221`, `2223`, `2225`, `2227`, `2229`, `2230`, `2231`, `2232`, `2235`, `2238`, `2239`, `2241`, `2243`, `2245`, `2246`, `2247`, `2248`, `2249`, `2251`, `2253`, `2254`, `2255`, `2257`, 
`2260`, `2261`, `2263`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2275`, `2276`, `2277`, `2280`, `2281`, `2282`, `2283`, `2284`, `2285`, `2287`, `2289`, `2291`, `2293`, `2294`, `2295`, `2296`, `2297`, `2298`, `2299`, `2300`, `2301`, `2303`, `2305`, `2307`, `2308`, `2309`, `2310`, `2311`, `2313`, `2314`, `2315`, `2316`, `2318`, `2319`, `2320`, `2321`, `2322`, `2324`, `2326`, `2327`, `2329`, `2330`, `2332`, `2334`, `2336`, `2338`, `2339`, `2340`, `2341`, `2342`, `2343`, `2345`, `2347`, `2349`, `2350`, `2351`, `2352`, `2353`, `2355`, `2356`, `2357`, `2359`, `2361`, `2364`, `2365`, `2366`, `2367`, `2368`, `2369`, `2370`, `2371`, `2372`, `2373`, `2374`, `2375`, `2378`, `2379`, `2380`, `2381`, `2382`, `2384`, `2385`, `2386`, `2387`, `2388`, `2390`, `2394`, `1763`, `2396`, `2398`, `2400`, `2402`, `2404`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2411`, `2413`, `2414`, `2416`, `2417`, `2418`, `2420`, `2422`, `2423`, `188`, `2425`, `2426`, `2427`, `2428`, `2430`, `2431`, `2432`, `2434`, `2435`, `2436`, `2437`, `2439`, `2440`, `2443`, `2444`, `2446`, `2447`, `2448`, `2449`, `2451`, `2453`, `2455`, `2456`, `2457`, `2458`, `2459`, `2461`, `2463`, `2465`, `2466`, `2467`, `2468`, `2469`, `2470`, `2471`, `2472`, `2475`, `2477`, `2478`, `2479`, `2480`, `2482`, `2483`, `2484`, `2485`, `2486`, `2488`, `2490`, `2491`, `2493`, `2495`, `2496`, `2498`, `2499`, `2501`, `2503`, `2504`, `2506`, `2508`, `2509`, `2511`, `2512`, `2513`, `2514`, `2516`, `2517`, `2519`, `2521`, `2522`, `2523`, `2524`, `2525`, `2526`, `2528`, `2529`, `2530`, `2532`, `2533`, `2534`, `2535`, `2536`, `2537`, `2538`, `2539`, `2540`, `2542`, `2543`, `2544`, `2545`, `2546`, `2547`, `2548`, `2549`, `2550`, `2551`, `2552`, `2554`, `2555`, `2556`, `2558`, `2559`, `2560`, `2564`, `2565`, `2566`, `2567`, `2568`, `2569`, `2570`, `2571`, `2572`, `2573`, `2575`, `2576`, `2577`, `2578`, `2579`, `2580`, `2581`, `2582`, `2584`, `2585`, `2586`, `2587`, `2588`, `2589`, `2590`, `2591`, `2592`, `2593`, `2594`, `2595`, `2596`, `2597`, `2598`, `2599`, `2602`, `2603`, `2604`, `2606`, `2608`, `2609`, `2610`, `2611`, `2613`, `2614`, `2615`, `2617`, `2621`, `2622`, `2623`, `2624`, `2625`, `2626`, `2627`, `2628`, `2631`, `2633`, `2635`, `2637`, `2638`, `2639`, `2640`, `2642`, `2643`, `2644`, `2646`, `2647`, `2649`, `2650`, `2652`, `2653`, `2654`, `2656`, `2657`, `2658`, `2659`, `2660`, `2661`, `2662`, `2664`, `2666`, `2667`, `2668`, `2669`, `2671`, `2672`, `2673`, `2676`, `2677`, `2678`, `2679`, `2680`, `2681`, `2683`, `2684`, `2685`, `2686`, `2688`, `2690`, `2691`, `2692`, `2694`, `2696`, `2698`, `2699`, `2700`, `2702`, `2703`, `2704`, `2706`, `2707`, `2708`, `2710`, `2711`, `2713`, `2714`, `2715`, `2717`, `2719`, `2720`, `2721`, `2722`, `2724`, `2725`, `2726`, `2727`, `2728`, `2729`, `2731`, `2732`, `2734`, `2735`, `2736`, `2738`, `2740`, `2741`, `2742`, `2744`, `2745`, `2746`, `2747`, `2748`, `2750`, `2753`, `2754`, `2755`, `2756`, `2757`, `2758`, `2760`, `2761`, `2762`, `2764`, `2765`, `2766`, `2767`, `2768`, `2769`, `2770`, `2771`, `2772`, `2773`, `2774`, `2775`, `2778`, `2780`, `2784`, `2785`, `2787`, `2788`, `2790`, `2792`, `2793`, `2794`, `2795`, `2797`, `2799`, `2802`, `2803`, `2805`, `2806`, `2808`, `2809`, `2811`, `2813`, `2815`, `2816`, `2817`, `2819`, `2823`, `2826`, `2827`, `2829`, `2831`, `2832`, `2834`, `2835`, `2837`, `2838`, `2840`, `2841`, `2842`, `2844`, `2846`, `2847`, `2848`, `2849`, `2850`, `2851`, `2852`, `2853`, `2855`, `2856`, `2857`, `2858`, `2859`, `2860`, `2861`, `2862`, `2863`, `2865`, 
`2866`, `2867`, `2868`, `2869`, `2870`, `2871`, `2872`, `2873`, `2874`, `2875`, `2876`, `2877`, `2878`, `2879`, `2880`, `2881`, `2882`, `2883`, `2884`, `2885`, `2886`, `2889`, `2890`, `2891`, `2892`, `2893`, `2894`, `2895`, `2896`, `2898`, `2899`, `2900`, `2901`, `2902`, `2903`, `2904`, `2905`, `2906`, `2909`, `2910`, `2911`, `2912`, `2914`, `2915`, `2916`, `2917`, `2918`, `2919`, `2922`, `2924`, `2926`, `2928`, `2929`, `2931`, `2933`, `2934`, `2935`, `2937`, `2938`, `2939`, `2940`, `2941`, `2942`, `2943`, `2945`, `2946`, `2947`, `2950`, `2951`, `2952`, `2953`, `2956`, `2957`, `2958`, `2959`, `2960`, `2962`, `2963`, `2964`, `2965`, `2966`, `2967`, `2968`, `2969`, `23`, `2970`, `2971`, `2972`, `2973`, `2974`, `2975`, `2976`, `2977`, `2978`, `2980`, `2981`, `2983`, `2984`, `2985`, `2986`, `2987`, `2988`, `2991`, `2992`, `2994`, `2995`, `2996`, `2997`, `2998`, `3000`, `3001`, `3002`, `3003`, `3004`, `3006`, `3009`, `3010`, `3011`, `3012`, `3013`, `3015`, `3017`, `3018`, `3020`, `3021`, `3022`, `3025`, `3026`, `3027`, `3028`, `3029`, `3030`, `11`, `3033`, `3034`, `3035`, `3037`, `3038`, `3039`, `3040`, `3041`, `3042`, `3045`, `3047`, `3049`, `3050`, `3051`, `3052`, `3054`, `3056`, `3058`, `3060`, `3062`, `3063`, `3064`, `3065`, `3068`, `3070`, `3071`, `3072`, `3073`, `3074`, `3075`, `3077`, `3079`, `3080`, `3082`, `3085`, `3087`, `3088`, `3090`, `3093`, `3095`, `3096`, `3097`, `3099`, `3101`, `3102`, `3103`, `3106`, `3108`, `3109`, `3110`, `3113`, `3114`, `3117`, `3118`, `3119`, `3121`, `3122`, `3125`, `3126`, `3128`, `3129`, `3130`, `3132`, `3133`, `3135`, `3136`, `3137`, `3139`, `3141`, `3142`, `3143`, `3144`, `3146`, `3147`, `3148`, `3149`, `3151`, `3152`, `3153`, `3154`, `3155`, `3158`, `3159`, `3161`, `3162`, `3164`, `3165`, `3166`, `3167`, `3168`, `3170`, `3171`, `3172`, `3174`, `3175`, `3177`, `3178`, `3179`, `3180`, `3181`, `3182`, `3183`, `3184`, `3185`, `3186`, `3187`, `3188`, `3190`, `3191`, `3193`, `3194`, `3195`, `3196`, `3197`, `3198`, `3199`, `3200`, `3202`, `3204`, `3205`, `3206`, `3207`, `3208`, `3209`, `3210`, `3211`, `3214`, `3215`, `3216`, `3217`, `3218`, `3219`, `3220`, `3222`, `3225`, `3226`, `3227`, `3228`, `3229`, `3231`, `3232`, `3233`, `3235`, `3238`, `3239`, `3240`, `3241`, `3242` |
</details>
### Accuracy
| Type | Score (%) |
| --- | --- |
| `TOKEN_F` | 99.79 |
| `TOKEN_P` | 99.78 |
| `TOKEN_R` | 99.80 |
| `TOKEN_ACC` | 99.96 |
| `SENTS_F` | 92.35 |
| `SENTS_P` | 94.94 |
| `SENTS_R` | 89.89 |
| `TAG_ACC` | 96.53 |
| `POS_ACC` | 97.85 |
| `MORPH_ACC` | 97.23 |
| `DEP_UAS` | 92.52 |
| `DEP_LAS` | 86.32 |
| `LEMMA_ACC` | 97.00 |
|
727c68a7cb9c38b3faa031916c765757
|
jonatasgrosman/exp_w2v2t_fa_hubert_s889
|
jonatasgrosman
|
hubert
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fa']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fa']
| false | true | true | 452 | false |
# exp_w2v2t_fa_hubert_s889
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
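Below is a minimal inference sketch using the HuggingSound library mentioned above; the audio paths are placeholders, and inputs should be 16kHz audio as noted.
```python
# Minimal sketch: transcribing Persian audio with this checkpoint via HuggingSound.
# Assumes `pip install huggingsound`; audio files must be sampled at 16kHz.
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_fa_hubert_s889")
audio_paths = ["/path/to/clip1.wav", "/path/to/clip2.wav"]  # placeholder paths

results = model.transcribe(audio_paths)
for result in results:
    # each result is expected to include a "transcription" field
    print(result["transcription"])
```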
|
cddc9361203f1534763017cdf981c86b
|
explosion/ro_udv25_romaniannonstandard_trf
|
explosion
| null | 28 | 1 |
spacy
| 0 |
token-classification
| false | false | false |
cc-by-sa-4.0
|
['ro']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 90,542 | false |
UD v2.5 benchmarking pipeline for UD_Romanian-Nonstandard
| Feature | Description |
| --- | --- |
| **Name** | `ro_udv25_romaniannonstandard_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
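The table above lists the trained components; a minimal usage sketch, assuming the `ro_udv25_romaniannonstandard_trf` package is installed in an environment with a compatible spaCy (`>=3.2.1,<3.3.0`), is shown below.
```python
# Minimal sketch: running the UD_Romanian-Nonstandard benchmarking pipeline with spaCy.
# Assumes the `ro_udv25_romaniannonstandard_trf` package is already installed.
import spacy

nlp = spacy.load("ro_udv25_romaniannonstandard_trf")
doc = nlp("Aceasta este o propoziție de exemplu.")  # hypothetical example sentence

# The tagger, morphologizer, parser and edit-tree lemmatizer annotate each token.
for token in doc:
    print(token.text, token.pos_, str(token.morph), token.dep_, token.lemma_)
```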
### Label Scheme
<details>
<summary>View label scheme (7445 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `AdpType=Prep\|Case=Acc`, `Afp`, `Afpf--n`, `Afpfp-n`, `Afpfpon`, `Afpfpoy`, `Afpfprn`, `Afpfpry`, `Afpfson`, `Afpfsoy`, `Afpfsrn`, `Afpfsry`, `Afpmp-n`, `Afpmpoy`, `Afpmprn`, `Afpmpry`, `Afpmpvy`, `Afpms-n`, `Afpmsoy`, `Afpmsrn`, `Afpmsry`, `Afpmsvn`, `Afpmsvy`, `COLON`, `COMMA`, `Cccsp`, `Cccsz`, `Ccssp`, `Ccssz`, `Cscsp`, `Csssp`, `DASH`, `DBLQ`, `Dd3-po---e`, `Dd3-po---o`, `Dd3fpo`, `Dd3fpr`, `Dd3fpr---e`, `Dd3fpr---o`, `Dd3fso`, `Dd3fso---e`, `Dd3fso---o`, `Dd3fsr`, `Dd3fsr---e`, `Dd3fsr---o`, `Dd3mpo`, `Dd3mpr`, `Dd3mpr---e`, `Dd3mpr---o`, `Dd3mso`, `Dd3mso---e`, `Dd3mso---o`, `Dd3msr`, `Dd3msr---e`, `Dd3msr---o`, `Dh1mp`, `Dh1ms`, `Dh2mp`, `Dh2ms`, `Dh3fp`, `Dh3mp`, `Dh3ms`, `Di3--r`, `Di3-po`, `Di3-sr`, `Di3fp`, `Di3fpo`, `Di3fpr`, `Di3fso`, `Di3fsr`, `Di3mpr`, `Di3mso`, `Di3msr`, `Ds1fp-p`, `Ds1fp-s`, `Ds1fsop`, `Ds1fsos`, `Ds1fsrp`, `Ds1fsrs`, `Ds1mp-p`, `Ds1mp-s`, `Ds1ms-p`, `Ds1ms-s`, `Ds2fp-p`, `Ds2fp-s`, `Ds2fsop`, `Ds2fsos`, `Ds2fsrp`, `Ds2fsrs`, `Ds2mp-p`, `Ds2mp-s`, `Ds2ms-p`, `Ds2ms-s`, `Ds3fp-s`, `Ds3fsos`, `Ds3fsrs`, `Ds3mp-s`, `Ds3ms-s`, `Dw3--r`, `Dw3-po`, `Dw3fpr`, `Dw3fso`, `Dw3fsr`, `Dw3mpr`, `Dw3mso`, `Dw3msr`, `Dz3fpr`, `Dz3fsr`, `Dz3msr`, `EXCL`, `EXCLHELLIP`, `HELLIP`, `I`, `LPAR`, `M`, `Mc-p-l`, `Mcfp-l`, `Mcfpol`, `Mcfprln`, `Mcfsoln`, `Mcfsoly`, `Mcfsrln`, `Mcfsrly`, `Mcmp-l`, `Mcms-ln`, `Mcmsoly`, `Mcmsrl`, `Mcmsrly`, `Mffsrln`, `Ml-po`, `Mlfpr`, `Mlmpr`, `Mmfpr-n`, `Mmmpr-n`, `Mmmsr-n`, `Mo---l`, `Mo---ln`, `Mo-s-r`, `Mofprln`, `Mofprly`, `Mofs-l`, `Mofs-ly`, `Mofsrln`, `Mofsrly`, `Momp-ln`, `Moms-l`, `Moms-ln`, `Momsoly`, `Momsrly`, `Ncfpoy`, `Ncfprn`, `Ncfpry`, `Ncfpvy`, `Ncfson`, `Ncfsoy`, `Ncfsrn`, `Ncfsry`, `Ncfsvn`, `Ncfsvy`, `Ncmpoy`, `Ncmprn`, `Ncmpry`, `Ncmpvy`, `Ncmson`, `Ncmsoy`, `Ncmsrn`, `Ncmsry`, `Ncmsvn`, `Ncmsvy`, `Ncnsrn`, `Np`, `Npfpoy`, `Npfprn`, `Npfpry`, `Npfsoy`, `Npfsrn`, `Npfsry`, `Npfsvn`, `Npmpoy`, `Npmprn`, `Npmpry`, `Npmsoy`, `Npmsrn`, `Npmsry`, `Npmsvn`, `Npmsvy`, `PERIOD`, `Pd3-po`, `Pd3-po---o`, `Pd3fpo`, `Pd3fpr`, `Pd3fso`, `Pd3fsr`, `Pd3mpo`, `Pd3mpr`, `Pd3mso`, `Pd3msr`, `Ph1mp`, `Ph1ms`, `Ph2mp`, `Ph2ms`, `Ph3--r`, `Ph3fp`, `Ph3fsr`, `Ph3mp`, `Ph3mpo`, `Ph3mpr`, `Ph3ms`, `Ph3mso`, `Pi3--r`, `Pi3-po`, `Pi3-so`, `Pi3-sr`, `Pi3fpo`, `Pi3fpr`, `Pi3fso`, `Pi3fsr`, `Pi3mpo`, `Pi3mpr`, `Pi3mpry`, `Pi3mso`, `Pi3msr`, `Pi3msry`, `Pp1-pa--------s`, `Pp1-pa--------w`, `Pp1-pd--------s`, `Pp1-pd--------w`, `Pp1-pr`, `Pp1-sa--------s`, `Pp1-sa--------w`, `Pp1-sd--------s`, `Pp1-sd--------w`, `Pp1-sr`, `Pp2-pa--------s`, `Pp2-pa--------w`, `Pp2-pd--------s`, `Pp2-pd--------w`, `Pp2-po`, `Pp2-pr`, `Pp2-sa--------s`, `Pp2-sa--------w`, `Pp2-sd--------s`, `Pp2-sd--------w`, `Pp2-so`, `Pp2-sr`, `Pp3-pd--------s`, `Pp3-pd--------w`, `Pp3-po`, `Pp3-pr`, `Pp3-sd--------w`, `Pp3-so`, `Pp3fpa--------s`, `Pp3fpa--------w`, `Pp3fpr`, `Pp3fsa--------s`, `Pp3fsa--------w`, `Pp3fsd--------s`, `Pp3fso`, `Pp3fsoy`, `Pp3fsr`, `Pp3mpa--------s`, `Pp3mpa--------w`, `Pp3mpo`, `Pp3mpr`, `Pp3msa--------s`, `Pp3msa--------w`, `Pp3msd--------s`, `Pp3mso`, `Pp3msr`, `Pp3msry`, `Ps1fp-p`, `Ps1fp-s`, `Ps1fsrp`, `Ps1fsrs`, `Ps1mp-p`, `Ps1ms-p`, `Ps1ms-s`, `Ps2fp-p`, `Ps2fp-s`, `Ps2fsrp`, `Ps2fsrs`, `Ps2mp-s`, `Ps2ms-p`, `Ps2ms-s`, `Ps3fp-s`, `Ps3fsrs`, `Ps3mp-s`, `Ps3ms-s`, `Pw3--r`, `Pw3-po`, `Pw3-pr`, `Pw3-pry`, `Pw3-so`, `Pw3fpr`, `Pw3fpry`, `Pw3fso`, `Pw3fsr`, `Pw3fsry`, `Pw3mpr`, `Pw3mpry`, `Pw3mso`, `Pw3msr`, `Pw3msry`, `Px3--a--------s`, `Px3--a--------w`, `Px3--d--------s`, `Px3--d--------w`, `Px3--d-------w`, `Pz3-so`, `Pz3-sr`, 
`Pz3fsr`, `Pz3mso`, `Pz3msr`, `QUEST`, `QUOT`, `Qn`, `Qs`, `Qz`, `RPAR`, `Rg`, `Ri`, `Rw`, `Rz`, `SCOLON`, `Sp`, `Spca`, `Spcg`, `Spsa`, `Spsd`, `Spsg`, `TILDA`, `Td-po`, `Tdfpr`, `Tdfso`, `Tdfsr`, `Tdmpr`, `Tdmso`, `Tdmsr`, `Tf-so`, `Tffsr`, `Tfmso`, `Tfmsr`, `Ti-po`, `Ti-pr`, `Tifso`, `Tifsr`, `Timso`, `Timsr`, `Tsfpr`, `Tsfso`, `Tsfsr`, `Tsmpr`, `Tsmsr`, `Vag-----p`, `Vag-----z`, `Vaii1p`, `Vaii1s`, `Vaii2p`, `Vaii2s`, `Vaii3p`, `Vaii3s`, `Vail3s`, `Vaip1p`, `Vaip1s`, `Vaip2p`, `Vaip2s`, `Vaip3`, `Vaip3p`, `Vaip3s`, `Vais1p`, `Vais1s`, `Vais2p`, `Vais2s`, `Vais3p`, `Vais3s`, `Vam-2p`, `Vam-2p---l`, `Vam-2s--p`, `Vam-2s--z`, `Vam-2s-p`, `Vam-2s-z`, `Vamip3p`, `Vamip3s`, `Vamn`, `Vamsp3`, `Van`, `Van------l`, `Vap`, `Vap--sm-p`, `Vasp1p`, `Vasp1s`, `Vasp2p`, `Vasp2s`, `Vasp3`, `Vasp3s`, `Vmg-----p`, `Vmg-----z`, `Vmii1p`, `Vmii1s`, `Vmii2p`, `Vmii2s`, `Vmii3p`, `Vmii3s`, `Vmil1s`, `Vmil2p`, `Vmil2s`, `Vmil3p`, `Vmil3s`, `Vmip1p`, `Vmip1s`, `Vmip2p`, `Vmip2s`, `Vmip3`, `Vmip3p`, `Vmip3s`, `Vmis1p`, `Vmis1s`, `Vmis2p`, `Vmis2s`, `Vmis3p`, `Vmis3s`, `Vmm-2p`, `Vmm-2p---l`, `Vmm-2s--p`, `Vmm-2s--z`, `Vmn`, `Vmn------l`, `Vmp`, `Vmp--pf-p`, `Vmp--pf-z`, `Vmp--pm-p`, `Vmp--pm-z`, `Vmp--sf-p--o`, `Vmp--sf-p--r`, `Vmp--sf-z--r`, `Vmp--sm-p`, `Vmp--sm-z`, `Vmsp1p`, `Vmsp1s`, `Vmsp2p`, `Vmsp2s`, `Vmsp3`, `Vmsp3s`, `X`, `Y` |
| **`morphologizer`** | `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `POS=ADV\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Int,Rel`, `POS=CCONJ\|Polarity=Pos`, `Compound=Yes\|POS=SCONJ\|Polarity=Pos`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PART\|PartType=Sub`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `POS=VERB\|VerbForm=Inf`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADV\|Polarity=Neg`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `POS=AUX\|Polarity=Pos\|VerbForm=Ger`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|Polarity=Pos\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=INTJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ\|Polarity=Pos`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Pos\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, 
`Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `AdpType=Prep\|Case=Acc\|Compound=Yes\|POS=ADP`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat,Gen\|Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|POS=DET\|Person=3\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Voc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres`, `POS=AUX\|VerbForm=Part`, `POS=VERB\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `POS=PART\|PartType=Inf`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Art`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Sub\|POS=AUX\|Person=3\|Tense=Pres`, `Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, 
`Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Pos`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `NumForm=Digit\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `POS=PROPN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Compound=Yes\|POS=CCONJ\|Polarity=Neg`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Strong`, `POS=AUX\|VerbForm=Inf`, `AdpType=Prep\|Case=Gen\|Compound=Yes\|POS=ADP`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=2\|PronType=Emp`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|Polarity=Neg\|VerbForm=Ger`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Emp`, `Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Weak`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Variant=Long\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Fem\|Number=Plur\|POS=VERB\|Polarity=Pos\|VerbForm=Part`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Mood=Sub\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Voc\|Definite=Ind\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Compound=Yes\|POS=CCONJ\|Polarity=Pos`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Voc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Art`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NOUN`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, 
`Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Emp`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Acc,Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|PronType=Prs`, `POS=VERB\|Variant=Long\|VerbForm=Inf`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `AdpType=Prep\|Case=Dat\|POS=ADP`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Strong`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Compound=Yes\|POS=ADV\|Polarity=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Definite=Ind\|NumForm=Word\|NumType=Ord\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=AUX\|Variant=Long\|VerbForm=Inf`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, 
`Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Strength=Strong`, `POS=X`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Strength=Weak`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|PronType=Prs`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Emp`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM\|PronType=Tot`, `Case=Acc,Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Polarity=Neg\|VerbForm=Part`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=AUX\|Polarity=Neg\|VerbForm=Ger`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres`, `Case=Acc,Nom\|POS=DET\|Person=3\|PronType=Ind`, `Case=Voc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `POS=CCONJ\|Polarity=Neg`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Voc\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Polite=Form\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, 
`Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past`, `Gender=Masc\|Number=Sing\|POS=AUX\|Polarity=Pos\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|Position=Postnom\|PronType=Dem`, `Case=Voc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `POS=PRON\|Polarity=Pos`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Emp`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=2\|PronType=Emp`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=3\|Position=Postnom\|PronType=Dem`, `Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Emp`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Compound=Yes\|POS=ADP\|Polarity=Pos`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADJ`, `Case=Voc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Position=Prenom\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, 
`Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `POS=ADV\|PronType=Ind`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `POS=AUX\|Polarity=Pos`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres`, `NumForm=Roman\|NumType=Ord\|Number=Sing\|POS=NUM`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp`, `Definite=Ind\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Variant=Long\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Variant=Long`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Imp`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|POS=PRON\|Person=3\|PronType=Emp`, `NumForm=Word\|NumType=Ord\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=1\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|PronType=Emp`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Art`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Ord\|Number=Sing\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Emp`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|POS=ADJ`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past`, `Case=Dat,Gen\|Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, 
`Case=Dat,Gen\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Dat,Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=VERB\|Polarity=Pos`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Neg`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Weak`, `Case=Dat,Gen\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Person=3\|Tense=Pres`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Neg`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes\|Strength=Strong`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat,Gen\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Degree=Pos\|Gender=Fem\|POS=ADJ`, `POS=DET`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADP`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|PronType=Int,Rel`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Fem\|NumType=Mult\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|NumType=Mult\|Number=Plur\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp`, `Case=Dat,Gen\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat,Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Case=Acc,Nom\|Definite=Def\|Gender=Fem\|NumForm=Word\|NumType=Ord\|Number=Plur\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|PronType=Neg`, `Case=Dat,Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=ADJ`, 
`Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind`, `Case=Acc,Nom\|Definite=Ind\|Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Part`, `Case=Acc,Nom\|Definite=Def\|Gender=Masc\|NumForm=Word\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=ADV\|Person=3\|PronType=Int,Rel`, `Case=Dat,Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Polite=Form\|PronType=Prs` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advcl:tcl`, `advmod`, `advmod:tmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `ccomp:pmod`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `expl:impers`, `expl:pass`, `expl:poss`, `expl:pv`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nmod:agent`, `nmod:pmod`, `nmod:tmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `3`, `4`, `6`, `8`, `12`, `14`, `16`, `19`, `23`, `29`, `30`, `32`, `35`, `37`, `39`, `40`, `45`, `46`, `47`, `51`, `53`, `54`, `57`, `61`, `63`, `65`, `66`, `69`, `33`, `71`, `73`, `76`, `79`, `80`, `84`, `86`, `87`, `88`, `89`, `92`, `95`, `97`, `100`, `103`, `105`, `107`, `110`, `112`, `113`, `115`, `117`, `120`, `121`, `123`, `125`, `126`, `128`, `130`, `132`, `133`, `136`, `140`, `143`, `145`, `147`, `58`, `148`, `151`, `154`, `157`, `159`, `163`, `165`, `167`, `171`, `174`, `176`, `178`, `180`, `182`, `184`, `185`, `187`, `188`, `190`, `192`, `196`, `197`, `199`, `200`, `202`, `206`, `208`, `210`, `211`, `213`, `215`, `216`, `219`, `221`, `223`, `225`, `226`, `228`, `230`, `232`, `236`, `238`, `241`, `242`, `244`, `246`, `248`, `251`, `253`, `255`, `258`, `260`, `264`, `265`, `267`, `272`, `275`, `278`, `280`, `281`, `284`, `286`, `287`, `290`, `291`, `292`, `295`, `296`, `298`, `300`, `301`, `302`, `305`, `306`, `307`, `309`, `310`, `312`, `314`, `315`, `317`, `319`, `321`, `323`, `324`, `327`, `330`, `332`, `334`, `335`, `337`, `339`, `340`, `343`, `344`, `345`, `346`, `350`, `351`, `353`, `355`, `357`, `360`, `362`, `366`, `368`, `369`, `370`, `371`, `224`, `374`, `376`, `378`, `379`, `381`, `384`, `385`, `386`, `388`, `389`, `391`, `392`, `393`, `396`, `398`, `399`, `403`, `406`, `408`, `411`, `413`, `415`, `418`, `422`, `423`, `426`, `427`, `431`, `433`, `436`, `438`, `440`, `442`, `445`, `448`, `449`, `450`, `451`, `452`, `454`, `455`, `457`, `459`, `460`, `462`, `464`, `466`, `468`, `471`, `472`, `473`, `474`, `475`, `478`, `481`, `482`, `485`, `486`, `488`, `490`, `492`, `494`, `495`, `497`, `498`, `499`, `501`, `503`, `504`, `506`, `508`, `510`, `513`, `514`, `515`, `516`, `518`, `519`, `521`, `523`, `524`, `526`, `527`, `528`, `530`, `533`, `96`, `537`, `538`, `539`, `542`, `544`, `545`, `547`, `548`, `553`, `555`, `556`, `558`, `559`, `561`, `562`, `563`, `565`, `566`, `570`, `572`, `573`, `575`, `577`, `578`, `579`, `581`, `583`, `584`, `586`, `588`, `589`, `592`, `594`, `595`, `596`, `598`, `599`, `600`, `601`, `604`, `606`, `607`, `608`, `612`, `613`, `616`, `619`, `621`, `623`, `625`, `628`, `629`, `630`, `632`, `635`, `636`, `173`, `639`, `641`, `643`, `647`, `649`, `651`, `654`, `656`, `658`, `659`, `661`, `662`, `663`, `666`, `668`, `669`, `670`, `672`, `673`, `676`, `677`, `679`, `681`, `683`, `685`, `687`, `689`, `690`, `691`, `693`, `694`, `695`, `696`, `698`, `699`, `701`, `702`, `703`, `704`, `705`, `706`, `708`, `712`, `713`, `716`, `718`, `720`, `722`, `724`, `725`, `729`, `732`, `734`, `735`, `736`, `739`, `742`, `745`, `747`, `750`, `753`, `755`, `758`, `759`, `761`, `763`, `764`, `766`, `768`, `769`, `771`, `772`, `774`, `777`, `778`, `781`, `784`, `785`, `787`, `790`, `794`, `797`, `800`, `801`, `802`, `804`, `807`, `809`, `814`, `817`, `820`, `821`, `822`, `824`, `827`, `828`, `829`, `832`, `834`, `836`, `837`, `839`, `840`, `841`, `843`, `844`, `846`, `847`, `848`, `850`, `851`, `852`, `855`, `116`, `856`, `860`, `861`, `863`, `866`, `868`, `869`, `871`, `874`, `875`, `877`, `879`, `881`, `884`, `886`, `888`, `890`, `891`, `892`, `894`, `897`, `898`, `900`, `901`, `902`, `904`, `905`, `908`, `913`, `914`, `916`, `917`, `918`, `921`, `922`, `924`, `927`, `929`, `932`, `934`, `935`, `937`, `939`, `941`, `943`, `946`, `948`, `949`, `951`, `952`, `954`, `955`, `956`, `958`, `960`, `963`, `965`, `968`, `971`, `972`, `974`, `978`, `981`, `983`, `984`, `986`, `988`, `989`, `991`, `992`, `994`, `997`, `998`, 
`1000`, `1001`, `1002`, `1004`, `1006`, `1007`, `1008`, `1010`, `1011`, `1013`, `1014`, `1015`, `1017`, `1019`, `1022`, `1024`, `1029`, `1030`, `1032`, `1034`, `767`, `1035`, `1036`, `1037`, `1038`, `1040`, `1041`, `1042`, `1044`, `1045`, `1046`, `1049`, `1050`, `1052`, `1053`, `1055`, `1058`, `1061`, `1065`, `1067`, `1068`, `1071`, `1072`, `1074`, `1076`, `1078`, `1080`, `1081`, `1083`, `1084`, `1086`, `1087`, `1090`, `1091`, `1093`, `1097`, `1098`, `1099`, `1100`, `1102`, `1105`, `1106`, `1107`, `1110`, `1111`, `1113`, `1116`, `1123`, `1126`, `1127`, `1128`, `1129`, `1131`, `1132`, `1133`, `1135`, `1137`, `1139`, `1141`, `1144`, `1145`, `1147`, `1149`, `1150`, `1152`, `1154`, `1155`, `1156`, `1157`, `1158`, `1115`, `1159`, `1160`, `1162`, `1163`, `1164`, `1165`, `1168`, `1170`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1178`, `1179`, `1181`, `1183`, `1184`, `1186`, `1187`, `1191`, `1195`, `1197`, `1198`, `1200`, `1201`, `1203`, `1205`, `1207`, `1209`, `1211`, `1212`, `1214`, `1215`, `1217`, `1219`, `1220`, `1223`, `1225`, `1227`, `183`, `1228`, `1231`, `1232`, `1234`, `1237`, `1239`, `1240`, `1242`, `1245`, `1247`, `1248`, `1249`, `1251`, `1252`, `1254`, `1255`, `1257`, `1259`, `1261`, `1263`, `1264`, `1266`, `1268`, `1272`, `1273`, `1277`, `1278`, `1280`, `1281`, `1282`, `1285`, `1286`, `1290`, `1291`, `1294`, `1296`, `1298`, `1300`, `1301`, `1303`, `1305`, `1308`, `1309`, `1310`, `1311`, `1312`, `1314`, `1316`, `1318`, `1320`, `1322`, `1324`, `1325`, `1327`, `1329`, `1331`, `1333`, `1335`, `1337`, `1338`, `1339`, `1341`, `1342`, `1343`, `1344`, `1346`, `1347`, `1350`, `142`, `1354`, `1355`, `1357`, `1358`, `1360`, `1362`, `1365`, `1366`, `1367`, `1368`, `1369`, `744`, `1370`, `1372`, `1373`, `1374`, `1375`, `1376`, `1377`, `1378`, `1380`, `1381`, `1382`, `1383`, `1386`, `1388`, `1389`, `1390`, `1394`, `1396`, `1399`, `1402`, `1405`, `1407`, `1409`, `1411`, `1412`, `1413`, `1414`, `1418`, `1419`, `1421`, `1422`, `1423`, `1424`, `1426`, `1427`, `1430`, `1432`, `1433`, `1434`, `1436`, `1438`, `1439`, `1440`, `1441`, `1442`, `1443`, `1446`, `1447`, `1448`, `1449`, `1450`, `1454`, `1456`, `1458`, `1459`, `1460`, `1464`, `1465`, `1467`, `1468`, `1469`, `1470`, `1472`, `1473`, `1475`, `1478`, `1479`, `1481`, `1483`, `1484`, `1486`, `1003`, `1489`, `1491`, `1493`, `1496`, `1498`, `1499`, `1501`, `1503`, `1506`, `1508`, `1511`, `1514`, `1515`, `1517`, `1518`, `1521`, `1522`, `1523`, `1524`, `1525`, `1528`, `1530`, `1531`, `1532`, `1533`, `1537`, `1539`, `1541`, `1542`, `1543`, `1545`, `1546`, `1547`, `1549`, `1550`, `1551`, `1552`, `1553`, `1555`, `1558`, `1559`, `1561`, `1562`, `1564`, `1566`, `1568`, `1570`, `1572`, `1576`, `1577`, `1579`, `1580`, `1582`, `1584`, `1585`, `1588`, `1590`, `1592`, `1593`, `1594`, `1596`, `1597`, `1599`, `1600`, `1601`, `1603`, `1605`, `1607`, `1609`, `1613`, `1615`, `1617`, `1619`, `1622`, `1623`, `1624`, `1625`, `1626`, `1627`, `1628`, `1629`, `1630`, `1633`, `1636`, `1638`, `1639`, `1640`, `1641`, `1643`, `1645`, `1647`, `1649`, `1652`, `1655`, `1656`, `1658`, `1660`, `1662`, `1665`, `1667`, `1669`, `1670`, `1671`, `1673`, `1674`, `1677`, `1678`, `1679`, `1680`, `1683`, `1686`, `1688`, `1689`, `1691`, `1693`, `1694`, `1696`, `1698`, `1699`, `1703`, `1704`, `1707`, `1708`, `1710`, `1712`, `1714`, `1716`, `1718`, `1720`, `1722`, `1724`, `1725`, `1726`, `1727`, `1729`, `1730`, `1731`, `1733`, `1734`, `1736`, `1737`, `1740`, `1741`, `1743`, `1744`, `1746`, `1747`, `1749`, `1750`, `1751`, `1752`, `1754`, `1755`, `1757`, `1758`, `1760`, `1762`, `1764`, 
`1766`, `1767`, `1769`, `1771`, `1774`, `1777`, `1779`, `1780`, `1781`, `1783`, `1785`, `1786`, `1789`, `1790`, `1793`, `1796`, `1799`, `1800`, `1802`, `1804`, `1805`, `1807`, `1809`, `1810`, `1813`, `1815`, `1817`, `1819`, `1822`, `1823`, `1825`, `1826`, `1827`, `1829`, `1830`, `1833`, `1835`, `1837`, `1840`, `1843`, `1844`, `1846`, `1848`, `1850`, `1853`, `1854`, `1855`, `1857`, `1859`, `1863`, `1865`, `1867`, `1870`, `1872`, `1873`, `1874`, `1875`, `1876`, `1878`, `1879`, `1880`, `1882`, `1884`, `1885`, `1888`, `1889`, `1892`, `1893`, `1895`, `1896`, `1897`, `1898`, `1899`, `1901`, `1903`, `1905`, `1907`, `1909`, `1911`, `1913`, `1915`, `1916`, `1918`, `1919`, `1921`, `1923`, `1925`, `1928`, `1931`, `1933`, `1935`, `1936`, `1938`, `1940`, `1943`, `1945`, `1946`, `1948`, `1951`, `1954`, `1956`, `1957`, `1958`, `1960`, `1962`, `1963`, `1965`, `1967`, `1969`, `1971`, `1973`, `1976`, `1977`, `1979`, `1981`, `1984`, `1986`, `1988`, `1989`, `1991`, `1994`, `1996`, `1999`, `2000`, `2001`, `2003`, `2004`, `2006`, `2008`, `2010`, `2011`, `2016`, `2017`, `2019`, `2020`, `2022`, `2023`, `2024`, `2025`, `2026`, `2027`, `2029`, `2031`, `2033`, `2034`, `2035`, `2036`, `2038`, `2041`, `2042`, `2043`, `2045`, `2047`, `2048`, `2049`, `2051`, `2053`, `2055`, `2057`, `2060`, `2063`, `2064`, `2066`, `2067`, `2068`, `2070`, `2071`, `2072`, `2073`, `2074`, `2075`, `2076`, `2079`, `2080`, `2082`, `2083`, `2084`, `2085`, `2086`, `2087`, `2089`, `2092`, `2094`, `2095`, `2098`, `2100`, `2102`, `2104`, `2105`, `2107`, `2109`, `2110`, `2112`, `2115`, `2117`, `2119`, `2120`, `2121`, `2123`, `2124`, `1482`, `2125`, `2127`, `2129`, `2132`, `2134`, `2137`, `2139`, `2140`, `2143`, `2146`, `2147`, `2148`, `2149`, `2150`, `2152`, `2154`, `2156`, `2157`, `2158`, `2159`, `2160`, `2161`, `2162`, `2164`, `2166`, `2168`, `2169`, `2170`, `2171`, `2173`, `2174`, `2177`, `2178`, `2180`, `2182`, `2183`, `2186`, `2188`, `2189`, `2191`, `2192`, `2193`, `2194`, `2195`, `2197`, `2198`, `2199`, `2200`, `2202`, `2206`, `2208`, `2209`, `2211`, `2214`, `2216`, `2217`, `2220`, `2221`, `2222`, `2223`, `2224`, `2225`, `2226`, `2228`, `2229`, `2230`, `2232`, `2234`, `2236`, `2237`, `2239`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2248`, `2249`, `2251`, `2252`, `2172`, `2254`, `2256`, `2257`, `2258`, `2259`, `2261`, `2262`, `2263`, `2265`, `2267`, `2268`, `2270`, `2274`, `2277`, `2279`, `2280`, `2281`, `2282`, `2284`, `2286`, `2287`, `2291`, `2293`, `2294`, `2296`, `2297`, `2298`, `2300`, `2303`, `2305`, `2307`, `2308`, `2310`, `2312`, `2314`, `2316`, `2317`, `2319`, `2321`, `2323`, `2325`, `2326`, `2328`, `2329`, `2330`, `2331`, `2332`, `2333`, `2334`, `2336`, `2338`, `2341`, `2343`, `2345`, `2348`, `2349`, `2351`, `2352`, `2353`, `2355`, `2356`, `2358`, `2359`, `2361`, `2362`, `2364`, `2366`, `2368`, `2369`, `2371`, `2373`, `2375`, `2377`, `2378`, `2379`, `2381`, `2382`, `2383`, `2384`, `2385`, `2387`, `2389`, `2392`, `2395`, `2396`, `2398`, `2399`, `2400`, `2404`, `2405`, `2406`, `2410`, `2411`, `2412`, `2413`, `2415`, `2418`, `2420`, `2421`, `2424`, `2425`, `2426`, `2429`, `2432`, `2434`, `2436`, `2437`, `2439`, `2440`, `2441`, `2443`, `2444`, `2446`, `2447`, `2450`, `2452`, `2454`, `2456`, `2459`, `2461`, `2464`, `2465`, `2467`, `2469`, `2471`, `2473`, `2474`, `2476`, `2478`, `2480`, `2481`, `2482`, `2483`, `2484`, `2486`, `2488`, `2489`, `2490`, `2491`, `2493`, `2495`, `2497`, `2499`, `2500`, `2502`, `2503`, `2505`, `2506`, `2507`, `2509`, `2511`, `2513`, `2514`, `2516`, `2518`, `2519`, `2521`, `2522`, `2524`, `2527`, `2528`, 
`2529`, `2531`, `2533`, `2534`, `2536`, `2537`, `2538`, `2540`, `2542`, `2543`, `2545`, `2546`, `2547`, `2549`, `2550`, `2552`, `2553`, `2556`, `2558`, `2560`, `2561`, `2562`, `2563`, `2564`, `2566`, `2567`, `2568`, `2572`, `2573`, `2574`, `2576`, `2577`, `2579`, `2580`, `2581`, `2583`, `2584`, `2585`, `2586`, `2587`, `2588`, `2589`, `2590`, `2591`, `2592`, `2594`, `2595`, `2598`, `2599`, `2603`, `2604`, `2606`, `2607`, `2608`, `2609`, `2612`, `2616`, `2619`, `2620`, `2622`, `2624`, `2625`, `2626`, `2627`, `2628`, `2631`, `2633`, `2635`, `2637`, `2638`, `2640`, `2641`, `2642`, `2643`, `2645`, `2646`, `2647`, `2649`, `2651`, `2654`, `2655`, `2658`, `2660`, `2661`, `2662`, `2663`, `2665`, `2666`, `1717`, `2667`, `2668`, `2669`, `2670`, `2671`, `2673`, `2674`, `2675`, `2676`, `2678`, `2680`, `2681`, `2684`, `2685`, `2687`, `2688`, `2690`, `2691`, `2692`, `2694`, `2695`, `2696`, `2697`, `2699`, `2701`, `2702`, `2705`, `2708`, `2709`, `2711`, `2714`, `2715`, `2716`, `2718`, `2721`, `2723`, `2724`, `2727`, `2728`, `2729`, `2732`, `2734`, `2737`, `2739`, `2740`, `2742`, `2743`, `2745`, `2748`, `2751`, `2754`, `2755`, `2756`, `2757`, `2758`, `2760`, `2762`, `2764`, `2765`, `2766`, `2428`, `2767`, `2768`, `2769`, `2770`, `2771`, `2774`, `2777`, `2779`, `2782`, `2783`, `2784`, `2786`, `2788`, `2789`, `2790`, `2791`, `2792`, `2794`, `2795`, `2796`, `2797`, `2799`, `2800`, `2801`, `2803`, `2807`, `2808`, `2809`, `2812`, `2816`, `2819`, `2822`, `2823`, `2824`, `2826`, `2827`, `2828`, `2830`, `2831`, `2832`, `2833`, `2834`, `2835`, `2837`, `2839`, `2840`, `2842`, `2843`, `2845`, `2846`, `2847`, `2848`, `2849`, `2851`, `2853`, `2854`, `2855`, `2856`, `2857`, `2858`, `2859`, `2860`, `2861`, `2862`, `2864`, `2865`, `2866`, `2868`, `2872`, `2875`, `2876`, `2878`, `2880`, `2881`, `2882`, `2883`, `2885`, `2886`, `2888`, `2889`, `2890`, `2891`, `2893`, `2894`, `2895`, `2896`, `2897`, `2898`, `2899`, `2902`, `2904`, `2906`, `2907`, `2908`, `2909`, `2912`, `2913`, `2915`, `2916`, `2917`, `2918`, `2921`, `2922`, `2923`, `2924`, `2925`, `2926`, `2928`, `2930`, `2931`, `2935`, `2936`, `2937`, `2938`, `2940`, `2233`, `2942`, `2944`, `2945`, `2947`, `2948`, `2949`, `2951`, `923`, `2952`, `2953`, `2954`, `2955`, `2957`, `2959`, `2962`, `2964`, `2966`, `2967`, `2969`, `2972`, `2973`, `2974`, `2976`, `1715`, `2977`, `2979`, `2980`, `36`, `2981`, `2983`, `2985`, `2986`, `2990`, `2991`, `2993`, `2995`, `2997`, `2998`, `3001`, `3002`, `3003`, `3005`, `3006`, `3007`, `3009`, `3012`, `3014`, `3015`, `3016`, `3018`, `3020`, `3021`, `3022`, `3023`, `3026`, `3028`, `3029`, `3030`, `3032`, `3035`, `3037`, `3039`, `3040`, `3042`, `3044`, `3047`, `3050`, `3052`, `3053`, `3041`, `3054`, `3055`, `3056`, `3057`, `3058`, `3059`, `3061`, `3062`, `3064`, `3066`, `3067`, `3068`, `3070`, `3071`, `3072`, `3073`, `3075`, `3078`, `3082`, `3084`, `3086`, `3087`, `3088`, `3090`, `3091`, `3092`, `3095`, `3096`, `3097`, `3099`, `3100`, `3102`, `3107`, `3109`, `3111`, `3112`, `3114`, `3116`, `3118`, `3120`, `3121`, `3123`, `3124`, `3126`, `3127`, `3129`, `3130`, `3133`, `3134`, `3135`, `3136`, `3137`, `3138`, `3139`, `3140`, `3142`, `3144`, `3145`, `3146`, `3147`, `3148`, `3149`, `3150`, `3151`, `3153`, `3155`, `3157`, `3158`, `3159`, `3160`, `3161`, `3163`, `3165`, `3167`, `3168`, `3170`, `3171`, `3172`, `3174`, `3176`, `3178`, `3180`, `3181`, `3184`, `3185`, `3186`, `3188`, `3189`, `3190`, `3192`, `3194`, `3195`, `3196`, `3197`, `3200`, `3201`, `3202`, `3203`, `3204`, `3205`, `3206`, `3207`, `3210`, `3211`, `3213`, `3214`, `3217`, `3218`, 
`3220`, `3222`, `3224`, `3227`, `3229`, `3230`, `3231`, `3233`, `3234`, `3235`, `3236`, `3237`, `3240`, `3241`, `3243`, `3245`, `3247`, `3250`, `3252`, `3253`, `3254`, `3255`, `3257`, `3259`, `3260`, `3262`, `3264`, `3266`, `3268`, `3269`, `3271`, `3273`, `3275`, `3277`, `3278`, `3141`, `3279`, `3280`, `3281`, `3282`, `3284`, `3285`, `3287`, `3288`, `3290`, `3291`, `3293`, `3294`, `3296`, `3297`, `3299`, `3300`, `3302`, `3304`, `3305`, `3306`, `3308`, `3309`, `3311`, `3313`, `3314`, `3315`, `3316`, `3317`, `3319`, `3321`, `3323`, `3324`, `3325`, `3327`, `3329`, `3332`, `3333`, `3334`, `3336`, `3337`, `3338`, `3340`, `3341`, `3342`, `3344`, `3346`, `3348`, `3351`, `3353`, `3355`, `3357`, `3360`, `3361`, `3364`, `3367`, `3369`, `3370`, `3372`, `3373`, `3374`, `3377`, `3379`, `3380`, `3382`, `3384`, `3385`, `3387`, `3389`, `3391`, `3392`, `3393`, `3394`, `3395`, `3397`, `3399`, `3400`, `3402`, `3403`, `3404`, `3405`, `3406`, `3407`, `3408`, `3412`, `3414`, `3416`, `3418`, `3420`, `3422`, `3423`, `3424`, `3425`, `3426`, `3428`, `3429`, `3431`, `3432`, `3435`, `3436`, `3438`, `3439`, `3441`, `3443`, `3445`, `3447`, `3450`, `3451`, `3453`, `3455`, `3456`, `3457`, `3458`, `3459`, `3461`, `3462`, `3464`, `3465`, `3467`, `3469`, `3471`, `3473`, `3474`, `3475`, `3476`, `3478`, `3479`, `3481`, `3482`, `3484`, `3487`, `3488`, `3489`, `3491`, `3492`, `3493`, `3494`, `3497`, `3500`, `3501`, `3502`, `3504`, `3506`, `3507`, `3508`, `3511`, `3515`, `3516`, `3518`, `3521`, `3524`, `3526`, `3528`, `3529`, `3532`, `3535`, `3537`, `3538`, `3539`, `3540`, `3541`, `3543`, `3545`, `3546`, `3547`, `3548`, `3549`, `3550`, `3551`, `3553`, `3555`, `3556`, `3557`, `3559`, `3561`, `3563`, `3564`, `3565`, `3567`, `3570`, `3572`, `3574`, `3575`, `3577`, `3579`, `3581`, `3582`, `3584`, `3585`, `3587`, `3588`, `3590`, `3591`, `3592`, `3594`, `3596`, `3599`, `3600`, `3603`, `3605`, `3606`, `3607`, `3608`, `3610`, `3612`, `3615`, `3617`, `3618`, `3619`, `3620`, `3621`, `3623`, `3624`, `3625`, `3626`, `3628`, `3629`, `3630`, `3632`, `3633`, `3635`, `3637`, `3639`, `3642`, `3643`, `3645`, `3646`, `3649`, `3650`, `3652`, `3653`, `3655`, `3656`, `3657`, `3658`, `3659`, `3662`, `3664`, `3665`, `3666`, `3668`, `3671`, `3672`, `3674`, `3676`, `3678`, `3679`, `3680`, `3681`, `3683`, `3684`, `3685`, `3687`, `3688`, `3689`, `3690`, `3691`, `3693`, `3694`, `3695`, `3697`, `3698`, `3699`, `3700`, `3702`, `3703`, `3704`, `3706`, `3709`, `3712`, `3713`, `3714`, `3718`, `3719`, `3721`, `3722`, `3724`, `3725`, `3726`, `3727`, `3730`, `3731`, `3732`, `3734`, `3735`, `3737`, `3739`, `3742`, `3743`, `3744`, `3745`, `3746`, `3747`, `3748`, `3750`, `3752`, `3753`, `3755`, `3757`, `3759`, `3760`, `3762`, `3763`, `3764`, `3765`, `3766`, `3768`, `3770`, `3771`, `3774`, `3775`, `3776`, `3778`, `3779`, `3780`, `3782`, `3784`, `3785`, `3786`, `3789`, `3792`, `3794`, `3795`, `3796`, `3798`, `3799`, `3800`, `3802`, `3803`, `3805`, `3807`, `3808`, `3809`, `3812`, `3815`, `3817`, `3818`, `3819`, `3821`, `3823`, `3824`, `3826`, `3828`, `3829`, `3831`, `3833`, `3834`, `3836`, `3839`, `3840`, `3843`, `3846`, `3849`, `3851`, `3852`, `3853`, `3855`, `3856`, `3859`, `3860`, `3862`, `3864`, `3865`, `3866`, `3868`, `3870`, `3871`, `3872`, `3874`, `3875`, `3876`, `3878`, `3879`, `3880`, `3881`, `3882`, `3884`, `3886`, `3887`, `3890`, `3891`, `3892`, `3893`, `3894`, `3896`, `3897`, `3899`, `3900`, `3901`, `3903`, `3904`, `3905`, `3906`, `3907`, `3908`, `3909`, `3910`, `3911`, `3912`, `3913`, `3915`, `3916`, `3919`, `3921`, `3923`, `3924`, `3926`, `3927`, `3928`, 
`3930`, `3931`, `3932`, `3934`, `3936`, `3939`, `3941`, `3942`, `3943`, `3946`, `3948`, `3949`, `3950`, `3951`, `3952`, `3954`, `3956`, `3957`, `3958`, `3960`, `3961`, `3964`, `3967`, `3968`, `3971`, `3974`, `3975`, `3976`, `3979`, `3981`, `3983`, `3985`, `3986`, `3989`, `3990`, `3993`, `3994`, `3995`, `3996`, `3997`, `3998`, `3999`, `4001`, `4003`, `4004`, `4005`, `4007`, `4009`, `4010`, `4011`, `4013`, `4014`, `4015`, `4017`, `4019`, `4022`, `4023`, `4025`, `4026`, `4027`, `4028`, `4029`, `4030`, `4032`, `4035`, `4037`, `4040`, `4041`, `4042`, `4043`, `4045`, `4048`, `4051`, `4053`, `4055`, `4057`, `4058`, `4059`, `4060`, `4061`, `4062`, `4063`, `4065`, `4067`, `4068`, `4070`, `4072`, `4073`, `4074`, `4075`, `4077`, `4080`, `4081`, `4083`, `4085`, `4088`, `4089`, `4091`, `4093`, `4094`, `4095`, `4096`, `4098`, `4101`, `4102`, `4104`, `4105`, `4106`, `4108`, `4109`, `4111`, `4112`, `4113`, `4115`, `4117`, `4119`, `4122`, `4123`, `4124`, `4125`, `4126`, `4127`, `4128`, `4130`, `4131`, `4134`, `4135`, `4136`, `4137`, `4138`, `4139`, `4141`, `4143`, `4145`, `4147`, `4148`, `4150`, `4151`, `4154`, `4155`, `4157`, `4159`, `4160`, `4163`, `4164`, `4166`, `4169`, `4171`, `4172`, `4173`, `4175`, `4176`, `4177`, `4179`, `4180`, `4181`, `4183`, `4184`, `4185`, `4187`, `4188`, `4190`, `4191`, `4193`, `4194`, `4195`, `4198`, `4201`, `4204`, `4205`, `4206`, `4209`, `4210`, `4212`, `4215`, `4216`, `4218`, `4219`, `4224`, `4225`, `4227`, `4229`, `4230`, `4231`, `4232`, `4234`, `4236`, `4237`, `4238`, `4239`, `4242`, `4244`, `4246`, `4247`, `4250`, `4251`, `4253`, `4256`, `4260`, `4261`, `4263`, `4265`, `4267`, `4268`, `4269`, `4270`, `4272`, `4274`, `4277`, `4278`, `4279`, `4281`, `4282`, `4284`, `4286`, `4287`, `4288`, `4291`, `4293`, `4294`, `4295`, `4296`, `4298`, `4299`, `4301`, `4303`, `4305`, `4306`, `4307`, `4308`, `4309`, `4310`, `4313`, `4315`, `4317`, `4319`, `4320`, `4322`, `4324`, `4326`, `4328`, `4329`, `4331`, `4332`, `4333`, `4334`, `4335`, `4336`, `4338`, `4340`, `4343`, `4344`, `4346`, `4347`, `4348`, `4349`, `4351`, `4353`, `4355`, `4357`, `4358`, `4359`, `4360`, `4361`, `4362`, `4363`, `4365`, `4367`, `4369`, `4372`, `4373`, `4374`, `4375`, `4379`, `4381`, `4383`, `4385`, `4386`, `4388`, `4389`, `4391`, `4392`, `4393`, `4395`, `4396`, `4399`, `4400`, `4402`, `4404`, `4406`, `4407`, `4411`, `4412`, `4413`, `4414`, `4415`, `4418`, `4420`, `4422`, `4425`, `4426`, `4428`, `4429`, `4430`, `4432`, `4433`, `4435`, `4438`, `4440`, `4442`, `4444`, `4445`, `4446`, `4448`, `4450`, `4451`, `4452`, `4455`, `4457`, `4459`, `4461`, `4462`, `4464`, `4467`, `4468`, `4469`, `4470`, `4471`, `4473`, `4474`, `4475`, `4478`, `4480`, `4483`, `4485`, `4487`, `4488`, `4490`, `4491`, `4493`, `867`, `4494`, `4496`, `4497`, `4498`, `4499`, `4500`, `4501`, `4503`, `4505`, `4507`, `4508`, `4509`, `4510`, `4512`, `4515`, `4517`, `4518`, `4519`, `4521`, `1589`, `4522`, `4524`, `4525`, `4527`, `4529`, `4531`, `4533`, `4534`, `4535`, `4537`, `4538`, `4539`, `4540`, `4541`, `4542`, `4543`, `4544`, `4545`, `4546`, `4547`, `4549`, `4551`, `4552`, `4553`, `4554`, `4556`, `4557`, `4558`, `4559`, `4562`, `4563`, `4566`, `4567`, `4569`, `4570`, `4572`, `4574`, `4576`, `4577`, `4579`, `4580`, `4581`, `4583`, `4585`, `4586`, `4588`, `4591`, `4592`, `4594`, `4595`, `4596`, `4597`, `4598`, `4599`, `4600`, `4601`, `4603`, `4606`, `4608`, `4609`, `4610`, `4612`, `4614`, `4616`, `4617`, `4620`, `4621`, `4623`, `4624`, `4625`, `4626`, `4627`, `4629`, `4631`, `4633`, `4635`, `4636`, `4637`, `4638`, `4639`, `4640`, `4642`, `4644`, 
`4646`, `4647`, `4648`, `4649`, `4650`, `4651`, `4653`, `4655`, `4657`, `4658`, `4659`, `4661`, `4662`, `4663`, `4664`, `4665`, `4667`, `4668`, `4669`, `4671`, `4673`, `4675`, `4676`, `4680`, `4681`, `4683`, `4684`, `4686`, `4687`, `4690`, `4693`, `4695`, `4696`, `4699`, `4700`, `4702`, `4703`, `4704`, `4707`, `4708`, `4709`, `4710`, `4711`, `4713`, `4715`, `4716`, `4718`, `4719`, `4721`, `4726`, `4727`, `4729`, `4731`, `4735`, `4737`, `4738`, `4739`, `4741`, `4743`, `4744`, `4748`, `4749`, `4753`, `4755`, `4756`, `4757`, `4758`, `4759`, `4761`, `4763`, `4764`, `4766`, `4768`, `4769`, `4770`, `4772`, `4774`, `4775`, `4777`, `4779`, `4780`, `4782`, `4783`, `4785`, `4787`, `4788`, `4791`, `4792`, `4793`, `4795`, `4797`, `4801`, `4802`, `4804`, `4806`, `4808`, `4809`, `4810`, `4811`, `4813`, `4815`, `4817`, `4818`, `4820`, `4821`, `4823`, `4826`, `4827`, `4828`, `4830`, `4831`, `4833`, `4834`, `4838`, `4840`, `4843`, `4845`, `4847`, `4848`, `4849`, `4850`, `4851`, `4854`, `4855`, `4856`, `4858`, `4860`, `4862`, `4863`, `4864`, `4866`, `4867`, `4869`, `4871`, `4872`, `4874`, `4875`, `4876`, `4878`, `4880`, `4881`, `4883`, `4885`, `4886`, `4889`, `4890`, `4892`, `4893`, `4894`, `4896`, `4897`, `4899`, `4900`, `4902`, `4903`, `4904`, `4905`, `4907`, `4908`, `4909`, `4911`, `4913`, `4914`, `4918`, `4920`, `4922`, `4924`, `4925`, `4926`, `4927`, `4928`, `4929`, `4931`, `4932`, `4933`, `4934`, `4935`, `4937`, `813`, `4941`, `4943`, `4945`, `4946`, `4947`, `4948`, `4950`, `4952`, `4954`, `4955`, `4956`, `4959`, `4962`, `4963`, `4964`, `4967`, `4969`, `4970`, `4972`, `4973`, `4974`, `4976`, `4977`, `4978`, `4980`, `4982`, `4984`, `4986`, `4989`, `4990`, `4991`, `4992`, `4994`, `4995`, `4997`, `4999`, `5002`, `5003`, `5004`, `5005`, `5007`, `5009`, `5010`, `5013`, `5014`, `5016`, `5017`, `5018`, `5019`, `5020`, `5021`, `5022`, `5024`, `5025`, `5026`, `5027`, `5029`, `5030`, `5032`, `5034`, `5035`, `5036`, `5037`, `5039`, `5042`, `5043`, `5045`, `5046`, `5049`, `5051`, `5053`, `5054`, `5056`, `5057`, `5058`, `5061`, `5063`, `5066`, `5068`, `5069`, `5070`, `5071`, `5072`, `5075`, `5077`, `5078`, `5080`, `5082`, `5084`, `5085`, `5087`, `5089`, `5090`, `5092`, `5094`, `5095`, `5096`, `5099`, `5100`, `5101`, `5102`, `5104`, `5105`, `5107`, `5109`, `5110`, `5112`, `5116`, `5120`, `5121`, `5122`, `5124`, `5125`, `5127`, `5128`, `5129`, `5132`, `5133`, `5135`, `5138`, `5141`, `5142`, `5143`, `5144`, `5145`, `5146`, `5148`, `5150`, `5151`, `5154`, `5155`, `5156`, `5159`, `5162`, `5163`, `5164`, `5165`, `5166`, `5168`, `5169`, `5170`, `5172`, `5173`, `5174`, `5176`, `5177`, `5179`, `5181`, `5182`, `957`, `5183`, `5184`, `5185`, `5188`, `5189`, `5191`, `5192`, `5195`, `5196`, `5198`, `5200`, `5201`, `5203`, `5204`, `5205`, `5207`, `5208`, `5210`, `5211`, `5214`, `5215`, `5216`, `5217`, `5218`, `5219`, `5220`, `5221`, `5222`, `5224`, `5225`, `5226`, `5227`, `5229`, `5231`, `5232`, `5234`, `5235`, `5237`, `5238`, `5240`, `5241`, `5242`, `5245`, `5246`, `5251`, `5253`, `5256`, `5257`, `2677`, `5259`, `5261`, `5263`, `5264`, `5266`, `5267`, `5271`, `5274`, `5275`, `5279`, `5280`, `5281`, `5283`, `5285`, `5287`, `5289`, `5290`, `5291`, `5293`, `5296`, `5297`, `5299`, `5300`, `5301`, `5302`, `5305`, `5307`, `5309`, `5311`, `5314`, `5315`, `5316`, `5317`, `5319`, `5320`, `5321`, `5323`, `5324`, `5326`, `5327`, `5329`, `5331`, `5332`, `5333`, `5334`, `5336`, `5337`, `5339`, `5340`, `5341`, `5343`, `5346`, `5347`, `5348`, `5349`, `5351`, `5352`, `5353`, `5354`, `5356`, `5357`, `1020`, `5358`, `5359`, `5360`, `5361`, 
`5362`, `5363`, `5364`, `5365`, `5367`, `5369`, `5370`, `5371`, `5373`, `5374`, `5377`, `5379`, `5382`, `5383`, `5384`, `5386`, `5387`, `5389`, `5390`, `5393`, `5394`, `5396`, `5397`, `5399`, `5400`, `5402`, `5403`, `5404`, `4463`, `5406`, `5409`, `5410`, `5412`, `5413`, `5415`, `5416`, `5417`, `5419`, `5420`, `5421`, `5422`, `5423`, `5425`, `5428`, `5429`, `5431`, `5432`, `5434`, `5435`, `5437`, `5439`, `5441`, `5446`, `5447`, `5450`, `5452`, `5453`, `5456`, `5458`, `5462`, `5464`, `5465`, `5467`, `5468`, `5469`, `5470`, `5471`, `5473`, `5475`, `5476`, `5477`, `5479`, `5480`, `5482`, `5484`, `5485`, `5487`, `5489`, `3877`, `5490`, `5492`, `5493`, `5494`, `5497`, `5498`, `5499`, `5500`, `5503`, `5505`, `5506`, `5509`, `5510`, `5511`, `5513`, `5514`, `5517`, `5520`, `5521`, `5522`, `5524`, `5526`, `5529`, `5530`, `5531`, `5532`, `5533`, `5534`, `5535`, `5536`, `5537`, `5539`, `5540`, `5542`, `5543`, `5545`, `5546`, `5548`, `5549`, `5550`, `5552`, `5554`, `5556`, `5557`, `5559`, `5560`, `3089`, `5563`, `5564`, `5565`, `5567`, `5569`, `5570`, `5572`, `5575`, `5576`, `5578`, `5579`, `5580`, `5582`, `5583`, `5584`, `5585`, `5587`, `5589`, `5590`, `5591`, `5595`, `5597`, `5598`, `5599`, `5602`, `5603`, `5606`, `5608`, `5611`, `5613`, `4981`, `5614`, `5616`, `5617`, `5622`, `5623`, `5624`, `5625`, `5626`, `5627`, `5630`, `5631`, `5633`, `5634`, `5635`, `5637`, `3169`, `5639`, `5641`, `5643`, `5645`, `5646`, `5649`, `5651`, `5654`, `5655`, `5657`, `5659`, `5660`, `5662`, `5663`, `5664`, `5665`, `5667`, `5668`, `5669`, `5670`, `5671`, `5672`, `5673`, `5676`, `5681`, `5682`, `5683`, `5684`, `5685`, `5687`, `5689`, `5691`, `5693`, `5694`, `5698`, `5700`, `5702`, `5703`, `5704`, `5706`, `5708`, `5709`, `5710`, `5713`, `5715`, `5717`, `5718`, `5719`, `5723`, `5724`, `5725`, `5726`, `5728`, `5730`, `5731`, `5733`, `5734`, `5736`, `5738`, `5741`, `5743`, `5744`, `5747`, `5748`, `5749`, `5751`, `5752`, `5754`, `5756`, `5757`, `5759`, `5760`, `5761`, `5762`, `5763`, `5764`, `5766`, `5768`, `5770`, `5771`, `5773`, `5775`, `5776`, `5777`, `5778`, `5780`, `5782`, `5784`, `5786`, `5787`, `5788`, `5790`, `5791`, `5792`, `5795`, `5796`, `5798`, `5799`, `5800`, `5801`, `5802`, `5805`, `5806`, `5811`, `5813`, `5814`, `5815`, `5816`, `5817`, `5818`, `5820`, `5821`, `5822`, `5823`, `5824`, `5827`, `5830`, `5832`, `5833`, `5834`, `5836`, `5837`, `5839`, `5840`, `5841`, `5842`, `5845`, `5847`, `5849`, `5851`, `5853`, `5856`, `5859`, `5862`, `5863`, `5865`, `5867`, `5868`, `5870`, `5872`, `5873`, `5875`, `5876`, `5877`, `5878`, `5879`, `5881`, `5883`, `5886`, `5887`, `5888`, `5889`, `5891`, `5892`, `5895`, `5896`, `5898`, `5900`, `5903`, `5904`, `5905`, `5906`, `5908`, `5909`, `5912`, `5915`, `5916`, `5917`, `5918`, `5919`, `5920`, `5922`, `5923`, `5925`, `5927`, `5928`, `5929`, `5931`, `5932`, `5933`, `5935`, `5939`, `5940`, `5941`, `5943`, `5945`, `5947`, `5948`, `5950`, `5951`, `5952`, `5955`, `5956`, `5957`, `5958`, `5959`, `5961`, `5962`, `5963`, `5964`, `5965`, `5967`, `5968`, `5969`, `5970`, `5971`, `5972`, `5974`, `5976`, `5977`, `5978`, `5980`, `5982`, `5983`, `5984`, `5986`, `5987`, `5988`, `5990`, `5991`, `5993`, `5995`, `5996`, `5999`, `6000`, `6003`, `6004`, `6006`, `6009`, `6010`, `6011`, `6012`, `6013`, `6015`, `6016`, `6019`, `6020`, `6022`, `6024`, `6025`, `6028`, `6031`, `6032`, `6036`, `6037`, `6039`, `6040`, `6041`, `6042`, `6044`, `6046`, `6047`, `6048`, `6049`, `6050`, `6051`, `6052`, `6054`, `6056`, `6057`, `6058`, `6059`, `6061`, `6062`, `6063`, `6065`, `6066`, `6068`, `6069`, `6071`, `6072`, 
`6073`, `6074`, `6075`, `6076`, `6078`, `6079`, `6080`, `6082`, `6083`, `6085`, `6087`, `6088`, `6090`, `6091`, `6092`, `6094`, `6095`, `6096`, `6097`, `6099`, `6100`, `6102`, `6104`, `6106`, `6108`, `6109`, `6110`, `6111`, `6112`, `6115`, `6118`, `6121`, `6123`, `6124`, `6125`, `6127`, `6128`, `6129`, `6130`, `6131`, `6132`, `6133`, `6134`, `6135`, `6136`, `6137`, `6138`, `6139`, `6140`, `6141`, `6142`, `6143`, `6144`, `6145`, `6147`, `6149`, `6151`, `6153`, `6154`, `6155`, `6156`, `6157`, `6158`, `6160`, `6161`, `6162`, `6163`, `6165`, `6166`, `6167`, `6168`, `6169`, `6170`, `6172`, `6174`, `6176`, `6177`, `6178`, `6180`, `6183`, `6185`, `6188`, `6190`, `6194`, `6196`, `6197`, `6198`, `6199`, `6201`, `6202`, `6203`, `6206`, `6207`, `6210`, `6211`, `6212`, `6214`, `6215`, `6218`, `6219`, `6220`, `6222`, `6223`, `6224`, `6225`, `6226`, `6228`, `6229`, `6230`, `6232`, `6236`, `6238`, `6240`, `6242`, `6243`, `6245`, `6246`, `6247`, `6249`, `6250`, `6252`, `6253`, `6255`, `6257`, `6258`, `6261`, `6262`, `6263`, `6264`, `6266`, `6268`, `6269`, `6270`, `6273`, `6274`, `6275`, `6276`, `6277`, `6278`, `6280`, `6282`, `6283`, `6284`, `6287`, `6289`, `6290`, `6291`, `6292`, `6293`, `6295`, `1732`, `6296`, `6299`, `6300`, `6302`, `6303`, `6305`, `6306`, `6307`, `6308`, `6309`, `6310`, `6311`, `6312`, `6315`, `6317`, `6319`, `6320`, `6322`, `6323`, `6324`, `6325`, `6328`, `6330`, `6331`, `6332`, `6333`, `6334`, `6336`, `6338`, `6339`, `6341`, `6343`, `6345`, `6347`, `6348`, `6349`, `6351`, `6352`, `6354`, `6357`, `6358`, `6360`, `6361`, `6362`, `6364`, `6365`, `6367`, `6369`, `6370`, `6371`, `111`, `6372`, `6373`, `2065`, `6374`, `6375`, `6377`, `6378`, `6380`, `6381`, `6382`, `6384`, `6385`, `6386`, `6387`, `6388`, `6391`, `6392`, `6393`, `6394`, `6396`, `6397`, `6399`, `6400`, `6401`, `6402`, `6404`, `6407`, `6408`, `6409`, `6411`, `6414`, `6416`, `6418`, `6419`, `6421`, `6422`, `6423`, `6425`, `6426`, `6428`, `6429`, `6430`, `6431`, `6432`, `6434`, `6435`, `6436`, `6437`, `6438`, `6440`, `6441`, `6442`, `6443`, `6444`, `6445`, `6447`, `6449`, `6451`, `6452`, `6455`, `6456`, `6457`, `6458`, `6459`, `6460`, `6462`, `6463`, `6464`, `6465`, `6466`, `6469`, `6470`, `6471`, `6473`, `6474`, `6475`, `6476`, `6478`, `6480`, `6481`, `6482`, `6485`, `6486`, `6487`, `6488`, `6489`, `6490`, `6491`, `6493`, `6494`, `6495`, `6497`, `6498`, `6499`, `5134`, `6500`, `6501`, `6502`, `6503`, `6504`, `6506`, `6508`, `6509`, `6510`, `6511`, `6512`, `6514`, `6515`, `6516`, `6517`, `6518`, `6519`, `6520`, `6521`, `6523`, `6526`, `6527`, `6529`, `6531`, `6533`, `6535`, `6536`, `6537`, `6538`, `6539`, `6540`, `6543`, `6544`, `6545`, `6547`, `6550`, `6551`, `6552`, `6553`, `6554`, `6555`, `6557`, `6559`, `6560`, `6561`, `6562`, `6564`, `6565`, `6567`, `6568`, `6569`, `6570`, `6571`, `6574`, `6575`, `6578`, `6579`, `6580`, `6581`, `6583`, `6584`, `6586`, `6588`, `6589`, `6591`, `6593`, `6595`, `6597`, `6599`, `6600`, `6601`, `6602`, `6604`, `6605`, `6607`, `6609`, `6611`, `6614`, `6615`, `6616`, `6618`, `6619`, `6620`, `6622`, `6623`, `1924`, `6626`, `6628`, `6629`, `6631`, `6633`, `6635`, `6637`, `6638`, `6639`, `6641`, `6643`, `6644`, `6647`, `6649`, `6650`, `6651`, `6652`, `6654`, `6655`, `6656`, `6658`, `6659`, `6661`, `6662`, `6663`, `6664`, `6665`, `6666`, `6667`, `6669`, `6670`, `6672`, `6673`, `6674`, `6675`, `6676`, `6678`, `6680`, `6681`, `6682`, `6684`, `6685`, `6689`, `6690`, `6691`, `6694`, `6696`, `6697`, `6698`, `6699`, `6701`, `6702`, `6703`, `6704`, `6706`, `6707`, `6709`, `6710`, `6712`, `6714`, `6715`, 
`6717`, `6718`, `6719`, `6720`, `6721`, `6724`, `6725`, `6727`, `6730`, `6732`, `6733`, `6736`, `6739`, `6740`, `6743`, `6745`, `6746`, `6747`, `6748`, `6749`, `6751`, `6754`, `6755`, `6756`, `6757`, `6758`, `6759`, `6761`, `6763`, `6765`, `6768`, `6770`, `6773`, `6774`, `6775`, `6777`, `6778`, `6780`, `6783`, `6784`, `6785`, `6787`, `6789`, `6790`, `6792`, `6796`, `6799`, `6800`, `6801`, `6802`, `6803`, `6805`, `6807`, `6808`, `6810`, `6812`, `6814`, `6817`, `6819`, `6821`, `6822`, `6824`, `6826`, `6828`, `6829`, `6830`, `6832`, `6834`, `6835`, `6836`, `6839`, `6841`, `6844`, `6846`, `6848`, `6850`, `6851`, `6852`, `6853`, `6854`, `6855`, `6856`, `6858`, `6859`, `6860`, `6862`, `6863`, `6864`, `6866`, `6868`, `6869`, `6871`, `6873`, `6877`, `6880`, `6884`, `6885`, `6887`, `6888`, `6889`, `6892`, `6893`, `6894`, `6895`, `6898`, `6900`, `6901`, `6902`, `6904`, `6905`, `6906`, `6907`, `6909`, `6911`, `6914`, `6915`, `6916`, `6918`, `6919`, `6921`, `6922`, `6923`, `6924`, `6925`, `6926`, `6929`, `6930`, `6931`, `6934`, `6935`, `6937`, `6939`, `6940`, `6941`, `6944`, `6946`, `6947`, `6948`, `6950`, `6952`, `6954`, `6956`, `6957`, `6959`, `6960`, `6961`, `6963`, `6964`, `6965`, `6966`, `6968`, `6969`, `6970`, `6971`, `6972`, `6973`, `6974`, `6975`, `6977`, `1222`, `6979`, `6980`, `6981`, `6982`, `6983`, `6984`, `6985`, `6987`, `6988`, `6989`, `6990`, `6991`, `6992`, `6993`, `6994`, `6997`, `6998`, `7000`, `7001`, `7002`, `7003`, `7004`, `7007`, `7009`, `7010`, `7011`, `7013`, `7014`, `7016`, `7017`, `7019`, `7020`, `7021`, `7023`, `7024`, `7026`, `2231`, `7027`, `7028`, `7029`, `7031`, `7032`, `7033`, `7034`, `7035`, `7037`, `7038`, `7039`, `7040`, `7042`, `7043`, `7044`, `7045`, `7046`, `7048`, `7049`, `7051`, `7053`, `7055`, `7059`, `7060`, `7061`, `7062`, `7064`, `7065`, `7067`, `7068`, `7071`, `7072`, `7073`, `7074`, `7076`, `7077`, `7081`, `7084`, `7085`, `7088`, `7090`, `7092`, `7093`, `7095`, `7096`, `7097`, `7098`, `7100`, `7101`, `7102`, `7104`, `7107`, `7108`, `7112`, `7113`, `7115`, `7116`, `7117`, `7120`, `7121`, `7122`, `7123`, `7124`, `7125`, `7126`, `7128`, `7131`, `7132`, `7133`, `7134`, `7135`, `7138`, `7140`, `7141`, `7142`, `7143`, `7145`, `7146`, `7148`, `7149`, `7152`, `7156`, `7158`, `7159`, `7160`, `7161`, `7162`, `7163`, `7166`, `7169`, `7170`, `7173`, `7174`, `7177`, `7178`, `7179`, `7180`, `7181`, `7183`, `7184`, `7185`, `7186`, `7188`, `7189`, `7191`, `7192`, `7195`, `7198`, `7199`, `7201`, `7203`, `7204`, `7205`, `7206`, `7208`, `7213`, `7215`, `7216`, `7219`, `7221`, `7224`, `7225`, `7227`, `7229`, `7231`, `7232`, `7235`, `7236`, `7237`, `7239`, `7240`, `7242`, `7243`, `7245`, `7246`, `7247`, `7248`, `7252`, `7253`, `7254`, `7256`, `7258`, `7259`, `7260`, `7262`, `7263`, `7264`, `7266`, `7268`, `7270`, `7271`, `7272`, `7273`, `7274`, `7276`, `7277`, `7278`, `7281`, `7282`, `7283`, `7286`, `7288`, `7290`, `1256`, `7291`, `7292`, `7293`, `7295`, `7298`, `7299`, `7301`, `7302`, `7303`, `7304`, `7306`, `7307`, `7308`, `7310`, `7312`, `7313`, `7316`, `7317`, `7318`, `7319`, `7320`, `7323`, `7324`, `7326`, `7328`, `7331`, `7332`, `7334`, `7336`, `7337`, `7338`, `7340`, `7342`, `7343`, `7344`, `7345`, `7346`, `7347`, `7348`, `7350`, `7352`, `7353`, `5131`, `7354`, `7356`, `7358`, `7360`, `7362`, `7363`, `7366`, `7367`, `7368`, `7369`, `7373`, `7374`, `7375`, `7376`, `7377`, `7378`, `7379`, `7382`, `7383`, `7384`, `7385`, `7386`, `7387`, `7388`, `7389`, `7392`, `7395`, `7397`, `7398`, `7400`, `7402`, `7405`, `7406`, `7408`, `7410`, `7411`, `7412`, `7414`, `7416`, `7417`, 
`7419`, `7421`, `7423`, `7425`, `7427`, `7428`, `7429`, `7430`, `7432`, `7434`, `7435`, `7436`, `7437`, `7439`, `7440`, `7443`, `7444`, `7445`, `7447`, `7448`, `7449`, `7451`, `7453`, `7454`, `7456`, `7458`, `7459`, `7460`, `7462`, `7463`, `7464`, `7465`, `7466`, `7467`, `7468`, `7469`, `7470`, `7471`, `7472`, `7475`, `7477`, `7478`, `7479`, `7481`, `7482`, `7483`, `7484`, `7485`, `7486`, `7487`, `7488`, `7490`, `7492`, `7496`, `7497`, `7498`, `7500`, `7501`, `7503`, `7505`, `7506`, `7509`, `7511`, `7512`, `7514`, `7515`, `7516`, `7518`, `7522`, `7523`, `7524`, `7255`, `7526`, `7527`, `7530`, `7532`, `7533`, `7535`, `7536`, `7539`, `7541`, `7544`, `7547`, `7548`, `7550`, `7552`, `7553`, `7555`, `7556`, `7558`, `7559`, `7560`, `7561`, `7563`, `7564`, `7565`, `7566`, `7567`, `7569`, `7571`, `7575`, `7577`, `7578`, `7580`, `7581`, `7585`, `7586`, `7588`, `7590`, `7593`, `7595`, `7597`, `7599`, `7600`, `7601`, `7603`, `7605`, `7607`, `7608`, `7609`, `7610`, `7611`, `7612`, `7613`, `7614`, `7615`, `7616`, `7617`, `7619`, `7620`, `7621`, `7622`, `7623`, `7625`, `7628`, `7630`, `7631`, `7632`, `7634`, `7635`, `3191`, `7636`, `7637`, `7639`, `7641`, `7642`, `7643`, `7644`, `7645`, `7646`, `7647`, `7648`, `7649`, `7650`, `7652`, `7653`, `7654`, `7655`, `7657`, `7658`, `7659`, `7660`, `7661`, `7662`, `7664`, `7665`, `7667`, `7668`, `7670`, `7672`, `7673`, `7674`, `7675`, `7677`, `7678`, `7679`, `7680`, `7681`, `7682`, `7684`, `7686`, `7687`, `7688`, `7690`, `7692`, `7693`, `7695`, `7696`, `7698`, `7700`, `7701`, `7703`, `7704`, `7707`, `7710`, `7711`, `7713`, `7714`, `7715`, `7717`, `7718`, `7719`, `7721`, `7722`, `7723`, `7725`, `7726`, `7728`, `7729`, `7730`, `7731`, `7732`, `7733`, `7734`, `7735`, `7737`, `7739`, `7741`, `7743`, `7744`, `7745`, `7748`, `7750`, `7752`, `7753`, `7755`, `7756`, `7757`, `7758`, `7759`, `7760`, `7761`, `7762`, `7763`, `7764`, `7765`, `7766`, `7767`, `7768`, `7769`, `7771`, `7772`, `7774`, `7775`, `7776`, `7778`, `7779`, `7781`, `7782`, `7784`, `7785`, `7788`, `7789`, `7790`, `7791`, `7793`, `7794`, `7796`, `7798`, `7800`, `7801`, `7803`, `7804`, `7806`, `7808`, `7810`, `7811`, `7813`, `7816`, `7817`, `7819`, `7822`, `7824`, `7826`, `7828`, `7831`, `7833`, `7834`, `7836`, `7838`, `7840`, `7841`, `7842`, `7844`, `7846`, `7848`, `7850`, `7851`, `7852`, `7853`, `7854`, `7855`, `7856`, `7857`, `7859`, `7860`, `7861`, `7862`, `7863`, `7866`, `7868`, `7871`, `7873`, `7875`, `7876`, `7878`, `7880`, `7883`, `7884`, `7885`, `7886`, `7888`, `7889`, `7891`, `7894`, `7895`, `7896`, `7898`, `7899`, `7900`, `7901`, `7902`, `7903`, `7905`, `7907`, `7909`, `7910`, `7912`, `7914`, `7915`, `7916`, `7917`, `7919`, `5472`, `7920`, `7921`, `7922`, `7923`, `7924`, `7926`, `7928`, `7930`, `7931`, `7933`, `7934`, `7935`, `7937`, `7938`, `7939`, `7941`, `7942`, `7945`, `7946`, `7947`, `7948`, `7951`, `7952`, `7953`, `7955`, `7956`, `7959`, `7960`, `7961`, `7962`, `7963`, `7964`, `7965`, `7966`, `7967`, `7969`, `7970`, `7971`, `7972`, `7974`, `7975`, `7976`, `7977`, `7978`, `7979`, `7982`, `7984`, `7985`, `7987`, `7988`, `7989`, `7990`, `7992`, `7993`, `7994`, `7995`, `7997`, `7998`, `7999`, `8000`, `8001`, `8002`, `8007`, `8008`, `8009`, `8011`, `8012`, `8014`, `8016`, `8019`, `8021`, `8023`, `8025`, `8027`, `8028`, `8030`, `8031`, `8032`, `8033`, `8035`, `8037`, `3820`, `8038`, `8040`, `8042`, `8044`, `8046`, `8047`, `8048`, `8049`, `2686`, `8050`, `8051`, `8053`, `8054`, `8055`, `8056`, `8058`, `8061`, `8062`, `8064`, `8065`, `8066`, `8067`, `8068`, `8069`, `8071`, `8072`, `8073`, `8074`, 
`8075`, `8076`, `8077`, `8078`, `8079`, `8080`, `8081`, `8083`, `8084`, `8085`, `8086`, `8087`, `8088`, `8090`, `8091`, `8093`, `8094`, `8095`, `8097`, `8098`, `8099`, `8101`, `8103`, `8104`, `8106`, `8108`, `8109`, `8110`, `8111`, `8112`, `8113`, `8115`, `8117`, `8118`, `8119`, `8120`, `8121`, `8124`, `8125`, `8127`, `8128`, `8129`, `8130`, `8131`, `8132`, `8133`, `8134`, `8136`, `8137`, `8139`, `8141`, `8142`, `8144`, `8145`, `8147`, `8151`, `8154`, `8155`, `8157`, `8158`, `8160`, `8161`, `8162`, `8164`, `8166`, `8167`, `8168`, `8169`, `8170`, `8171`, `8173`, `8174`, `8176`, `8177`, `8178`, `8179`, `8181`, `8182`, `8183`, `8185`, `8186`, `8187`, `8188`, `8189`, `8190`, `8191`, `8192`, `8193`, `8194`, `8195`, `8197`, `8199`, `8201`, `8202`, `8203`, `7736`, `8204`, `8205`, `8206`, `8207`, `8209`, `8210`, `8211`, `8213`, `8215`, `8216`, `8218`, `8219`, `8220`, `8221`, `8222`, `8223`, `7839`, `8224`, `8225`, `8227`, `2984`, `8229`, `8230`, `8231`, `8232`, `8235`, `8237`, `8239`, `8240`, `8241`, `8245`, `8246`, `8248`, `8249`, `8250`, `8253`, `8254`, `8256`, `8257`, `8259`, `8260`, `8261`, `8263`, `8264`, `8265`, `8266`, `8267`, `8268`, `8269`, `8271`, `8272`, `8273`, `8274`, `8275`, `8280`, `8281`, `8282`, `8284`, `8285`, `8286`, `8287`, `8288`, `8290`, `8291`, `8292`, `8293`, `8294`, `8295`, `8297`, `8299`, `8300`, `8301`, `8302`, `8303`, `8306`, `8308`, `8309`, `8310`, `8312`, `8313`, `8314`, `8316`, `8317`, `8319`, `8321`, `8323`, `8325`, `8326`, `8327`, `8329`, `8330`, `8331`, `8332`, `8333`, `8336`, `8338`, `8339`, `8296`, `8340`, `8342`, `8343`, `8344`, `8345`, `8347`, `8349`, `8350`, `8352`, `8357`, `8359`, `8360`, `8361`, `8362`, `8363`, `8365`, `8366`, `8367`, `8369`, `8370`, `8372`, `8373`, `8375`, `8377`, `8378`, `8379`, `8381`, `8382`, `8383`, `8385`, `8388`, `8389`, `8391`, `8392`, `8394`, `8396`, `8398`, `8270`, `8399`, `8402`, `8404`, `8405`, `8407`, `8409`, `8411`, `8412`, `8414`, `8415`, `8417`, `8419`, `8420`, `8423`, `8426`, `8427`, `8428`, `8431`, `8432`, `8433`, `8434`, `8435`, `8437`, `8438`, `8441`, `8443`, `8444`, `8445`, `8446`, `8447`, `8449`, `8453`, `8455`, `8457`, `8459`, `8460`, `8462`, `8463`, `8464`, `8466`, `8467`, `8468`, `8469`, `8470`, `8472`, `8473`, `8474`, `8475`, `8476`, `8478`, `8479`, `8481`, `8484`, `8485`, `8486`, `8488`, `8489`, `8491`, `8494`, `8495`, `8496`, `8497`, `8498`, `8499`, `8500`, `8503`, `8505`, `8506`, `8508`, `8509`, `8510`, `8511`, `8512`, `8513`, `8514`, `8515`, `8516`, `8517`, `8519`, `8521`, `8522`, `8523`, `8524`, `8525`, `8526`, `8527`, `8529`, `8530`, `8532`, `8535`, `8537`, `8538`, `8539`, `8541`, `8542`, `8543`, `8544`, `8549`, `8550`, `8551`, `8552`, `8553`, `8554`, `8555`, `8557`, `8558`, `8559`, `8562`, `8563`, `8564`, `8566`, `8569`, `8570`, `8571`, `8573`, `8575`, `8577`, `8578`, `8579`, `8580`, `8581`, `8584`, `8585`, `8586`, `8587`, `8589`, `8590`, `8592`, `8593`, `8594`, `8595`, `8597`, `8598`, `8600`, `8601`, `8602`, `8604`, `8605`, `8608`, `8610`, `8611`, `8612`, `8613`, `8614`, `8615`, `8616`, `8618`, `8619`, `8620`, `8621`, `8622`, `8625`, `8627`, `8629`, `8630`, `8632`, `8634`, `8636`, `8637`, `8638`, `8640`, `8642`, `8643`, `8644`, `8646`, `8647`, `8649`, `8650`, `8651`, `8653`, `8655`, `8656`, `8657`, `8658`, `8659`, `8660`, `8662`, `8664`, `8665`, `8666`, `8667`, `8669`, `8670`, `8671`, `8673`, `8674`, `8675`, `8676`, `8677`, `8678`, `8679`, `8680`, `8681`, `8683`, `8685`, `8687`, `8689`, `8691`, `8692`, `8693`, `8694`, `8696`, `8697`, `8698`, `8700`, `8701`, `8702`, `8703`, `8704`, `8705`, `8706`, `8707`, 
`8708`, `8709`, `8710`, `8712`, `8713`, `8715`, `8717`, `8719`, `8722`, `8723`, `8725`, `8726`, `8727`, `8729`, `8730`, `8732`, `8734`, `8736`, `8738`, `8739`, `8740`, `8741`, `8743`, `8744`, `8745`, `8747`, `8748`, `8752`, `8753`, `8754`, `8755`, `8756`, `8757`, `8758`, `8760`, `8761`, `8762`, `8763`, `8765`, `8766`, `8767`, `8768`, `8770`, `8771`, `8773`, `8774`, `8775`, `8776`, `8778`, `8779`, `8780`, `8781`, `8782`, `8785`, `8786`, `8787`, `8789`, `8790`, `8791`, `8793`, `8795`, `8798`, `8800`, `8801`, `8802`, `8804`, `8805`, `8807`, `8808`, `8809`, `8810`, `8813`, `8815`, `8816`, `8817`, `8819`, `8820`, `8821`, `8822`, `8823`, `401`, `8824`, `8826`, `8827`, `8829`, `8830`, `8831`, `8833`, `8835`, `8837`, `8839`, `8840`, `8841`, `8842`, `8844`, `8845`, `8847`, `8849`, `8851`, `8852`, `8853`, `8855`, `8857`, `8858`, `8859`, `8864`, `8865`, `8866`, `8867`, `8869`, `8870`, `8871`, `8874`, `8877`, `8879`, `8880`, `8881`, `8883`, `8884`, `8886`, `8887`, `8890`, `8891`, `8892`, `8893`, `8895`, `8897`, `8899`, `8900`, `8901`, `8903`, `8906`, `8907`, `8909`, `8911`, `8914`, `8916`, `8917`, `8919`, `8920`, `8921`, `8922`, `8923`, `8927`, `8928`, `8930`, `8931`, `8933`, `8934`, `8937`, `8939`, `8940`, `8941`, `8942`, `8944`, `8945`, `8947`, `8948`, `8949`, `8950`, `8951`, `8953`, `8954`, `8955`, `8958`, `8960`, `8962`, `8965`, `8966`, `8967`, `8968`, `8969`, `8970`, `8971`, `8972`, `8974`, `8976`, `8977`, `8978`, `8979`, `8980`, `8981`, `8982`, `8983`, `8984`, `8985`, `8987`, `8991`, `8992`, `8993`, `8994`, `8995`, `8996`, `8998`, `8999`, `9000`, `9002`, `9003`, `9004`, `9005`, `9007`, `9009`, `9010`, `9011`, `9014`, `9015`, `9016`, `9018`, `9019`, `9020`, `9022`, `9024`, `9025`, `9026`, `9028`, `9030`, `9031`, `9032`, `9034`, `9035`, `9037`, `9038`, `9039`, `9042`, `9043`, `9044`, `9046`, `9048`, `9050`, `9051`, `9053`, `9054`, `9055`, `9057`, `9058`, `8932`, `9059`, `9060`, `9061`, `9062`, `9064`, `9068`, `1932`, `9069`, `9070`, `9071`, `9072`, `9073`, `9074`, `9076`, `9079`, `9080`, `9083`, `9084`, `9087`, `9088`, `9090`, `9091`, `9093`, `9095`, `9096`, `9097`, `9098`, `9100`, `9103`, `9104`, `9105`, `9106`, `9107`, `9108`, `9109`, `9110`, `9111`, `9112`, `9113`, `9114`, `9116`, `9119`, `9120`, `9121`, `9122`, `9123`, `9124`, `9127`, `9128`, `9129`, `9130`, `9131`, `9132`, `9133`, `9134`, `9135`, `9136`, `9138`, `9139`, `9141`, `9142`, `9144`, `9145`, `9146`, `9148`, `9149`, `9150`, `9152`, `9153`, `9156`, `9158`, `9160`, `9162`, `9165`, `7986`, `9168`, `9170`, `9171`, `9172`, `9173`, `9175`, `9176`, `9177`, `9179`, `9180`, `9182`, `9183`, `9185`, `9188`, `9190`, `9191`, `9192`, `9194`, `9198`, `9200`, `9201`, `9202`, `9204`, `9206`, `9207`, `5871`, `9210`, `9211`, `9213`, `9214`, `9215`, `9217`, `9218`, `9220`, `9221`, `9222`, `9226`, `9228`, `9230`, `9231`, `9233`, `9234`, `9235`, `9238`, `9239`, `9241`, `9242`, `9244`, `9246`, `9249`, `9251`, `9252`, `9255`, `9256`, `9259`, `9260`, `9262`, `9263`, `9265`, `9269`, `9270`, `9273`, `9274`, `9277`, `3858`, `9279`, `9281`, `9282`, `9284`, `9287`, `7598`, `9289`, `9292`, `9294`, `9295`, `9296`, `9297`, `9298`, `9299`, `9301`, `9302`, `9304`, `9306`, `9308`, `9311`, `9312`, `9313`, `9314`, `9318`, `9320`, `9322`, `9325`, `9326`, `9327`, `9329`, `9331`, `9333`, `9334`, `9336`, `9338`, `9339`, `9340`, `9341`, `9342`, `9343`, `9344`, `9346`, `9347`, `9349`, `9350`, `9352`, `9353`, `9355`, `9358`, `9359`, `9360`, `9363`, `9365`, `9368`, `9369`, `9371`, `9373`, `9374`, `9375`, `9376`, `9377`, `9379`, `9382`, `9383`, `9384`, `9387`, `9388`, `9389`, 
`9390`, `9391`, `9392`, `9393`, `9395`, `9396`, `9398`, `9400`, `9401`, `9404`, `9406`, `9409`, `9410`, `9412`, `9414`, `9416`, `9417`, `9418`, `9420`, `9421`, `9424`, `9426`, `9428`, `9429`, `9431`, `9432`, `9433`, `9434`, `9435`, `9436`, `9438`, `9441`, `9443`, `9445`, `9446`, `9447`, `9448`, `9449`, `9450`, `9451`, `9453`, `9454`, `9455`, `9457`, `9458`, `9459`, `9460`, `9461`, `9462`, `9463`, `9464`, `9465`, `9467`, `9469`, `9471`, `9474`, `9476`, `9477`, `9478`, `9479`, `9480`, `973`, `9482`, `9483`, `9485`, `9486`, `9488`, `9489`, `9490`, `9492`, `9493`, `9495`, `9496`, `9498`, `9499`, `9501`, `9502`, `9504`, `9506`, `9507`, `9508`, `9511`, `9512`, `9514`, `9515`, `9518`, `9519`, `9521`, `9523`, `9524`, `9526`, `9528`, `9531`, `9533`, `9534`, `9535`, `9537`, `9539`, `9540`, `9541`, `9543`, `9545`, `9546`, `9548`, `9549`, `9550`, `9551`, `9554`, `9555`, `9556`, `9557`, `9559`, `9561`, `9562`, `9565`, `9567`, `9570`, `9571`, `9573`, `7877`, `9575`, `9578`, `9580`, `9582`, `9583`, `9586`, `9587`, `9588`, `9589`, `9591`, `9592`, `9593`, `9594`, `9595`, `9597`, `9599`, `9601`, `9603`, `9604`, `9605`, `9607`, `9610`, `5979`, `9611`, `9612`, `9613`, `9614`, `9616`, `9617`, `9618`, `9620`, `9621`, `9622`, `9624`, `9627`, `9629`, `9630`, `9632`, `9633`, `9636`, `9637`, `9638`, `9640`, `9641`, `9642`, `9644`, `9646`, `9647`, `9649`, `9650`, `9653`, `9656`, `9657`, `9658`, `9659`, `9660`, `9662`, `9663`, `9664`, `9665`, `9666`, `9667`, `9670`, `9673`, `9675`, `9677`, `9679`, `9681`, `9682`, `9683`, `9684`, `9686`, `9688`, `9689`, `9690`, `9692`, `9693`, `9695`, `9696`, `9697`, `9699`, `9701`, `9703`, `9705`, `9707`, `9710`, `9713`, `9714`, `9715`, `9717`, `9718`, `9721`, `9722`, `9724`, `9725`, `9726`, `9727`, `9729`, `9730`, `9731`, `9732`, `9733`, `9735`, `9737`, `9739`, `9740`, `9741`, `9744`, `9747`, `9748`, `9750`, `9751`, `9753`, `9754`, `9755`, `9756`, `9758`, `9759`, `9760`, `9761`, `9762`, `9764`, `9768`, `9770`, `9772`, `9774`, `9776`, `9777`, `9779`, `9780`, `9782`, `9783`, `9784`, `9787`, `9789`, `9790`, `9791`, `9793`, `9794`, `9795`, `9796`, `9797`, `9798`, `9799`, `9800`, `9803`, `9805`, `9807`, `9809`, `9810`, `9811`, `9813`, `9816`, `9817`, `9819`, `9820`, `9822`, `9823`, `9824`, `9825`, `9827`, `9828`, `9830`, `9831`, `9832`, `9834`, `9836`, `9837`, `9839`, `9840`, `9841`, `9842`, `9844`, `9845`, `9846`, `9847`, `9848`, `9850`, `9851`, `9853`, `9854`, `9855`, `9856`, `9857`, `2337`, `8520`, `9858`, `9861`, `9862`, `9757`, `9864`, `9865`, `9867`, `9868`, `9870`, `9871`, `9872`, `9873`, `9874`, `9877`, `9878`, `9879`, `9880`, `9882`, `9884`, `9885`, `9887`, `9889`, `9890`, `9892`, `9894`, `9895`, `9897`, `9899`, `9901`, `9903`, `9906`, `9907`, `9909`, `9911`, `9914`, `9916`, `9918`, `9919`, `9920`, `9922`, `9924`, `9927`, `9929`, `9930`, `9932`, `9935`, `9936`, `9938`, `9939`, `9940`, `9941`, `9942`, `9943`, `9944`, `9945`, `9946`, `9947`, `9948`, `9949`, `9950`, `9951`, `9952`, `9953`, `9955`, `9956`, `9957`, `9958`, `9960`, `9962`, `9963`, `9964`, `9965`, `9967`, `9968`, `9970`, `9971`, `9974`, `9977`, `9978`, `9980`, `9981`, `6878`, `9982`, `9984`, `9985`, `9987`, `9988`, `9989`, `9992`, `9993`, `9994`, `9995`, `9999`, `10001`, `10002`, `10003`, `10004`, `10006`, `10007`, `1912`, `10008`, `10011`, `10013`, `10014`, `10016`, `10017`, `10019`, `10020`, `10023`, `10025`, `10028`, `10029`, `10030`, `10033`, `10034`, `10036`, `10038`, `10039`, `10040`, `10041`, `10042`, `10044`, `10046`, `10048`, `10050`, `10051`, `10053`, `10055`, `10057`, `10058`, `10060`, `10061`, `10062`, 
`10063`, `10065`, `10066`, `10069`, `10070`, `10071`, `10073`, `10076`, `10078`, `10079`, `10081`, `10085`, `10086`, `10091`, `10092`, `10093`, `10094`, `10096`, `10098`, `10099`, `10100`, `10101`, `10104`, `10105`, `10106`, `10107`, `10110`, `10111`, `10112`, `10114`, `10115`, `10116`, `10118`, `10119`, `10120`, `10123`, `10124`, `10125`, `10127`, `10128`, `10129`, `10130`, `10131`, `10133`, `10134`, `10136`, `10138`, `10139`, `10142`, `10143`, `10146`, `10148`, `10149`, `10150`, `10152`, `10154`, `10156`, `10159`, `10161`, `10163`, `10164`, `10165`, `10167`, `10168`, `10169`, `10170`, `10171`, `10172`, `10175`, `10176`, `10177`, `10180`, `10183`, `10185`, `10186`, `10187`, `10189`, `10191`, `10193`, `10195`, `10196`, `10197`, `10198`, `10199`, `10200`, `10202`, `10203`, `10204`, `10207`, `10208`, `10210`, `10211`, `10213`, `10214`, `10215`, `10217`, `10218`, `10220`, `10222`, `10224`, `10225`, `10227`, `10228`, `10230`, `10232`, `10234`, `10235`, `10237`, `10238`, `10239`, `10241`, `10242`, `10243`, `10245`, `10248`, `10249`, `10251`, `10252`, `10253`, `10255`, `10258`, `10259`, `10260`, `10261`, `10262`, `10263`, `10265`, `10267`, `10268`, `10269`, `10270`, `10272`, `10273`, `10275`, `10276`, `10277`, `10278`, `10279`, `10280`, `10281`, `10284`, `10285`, `10287`, `10288`, `10291`, `10292`, `10294`, `10296`, `10297`, `10298`, `10300`, `10302`, `10303`, `10304`, `10306`, `10307`, `10308`, `10309`, `10312`, `10313`, `10314`, `10315`, `10316`, `10317`, `10318`, `10319`, `10320`, `10321`, `10323`, `10324`, `10327`, `10328`, `10329`, `10330`, `10332`, `10333`, `10335`, `10336`, `10337`, `10340`, `10341`, `10343`, `10344`, `10345`, `10346`, `10347`, `10348`, `10349`, `10350`, `10351`, `10352`, `10353`, `10354`, `10356`, `10357`, `10359`, `10360`, `10363`, `10365`, `10366`, `10368`, `10370`, `10371`, `10372`, `10373`, `10374`, `10375`, `10376`, `10377` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.06 |
| `TOKEN_P` | 99.06 |
| `TOKEN_R` | 99.06 |
| `TOKEN_ACC` | 99.77 |
| `SENTS_F` | 97.00 |
| `SENTS_P` | 97.32 |
| `SENTS_R` | 96.67 |
| `TAG_ACC` | 93.85 |
| `POS_ACC` | 97.66 |
| `MORPH_ACC` | 93.64 |
| `DEP_UAS` | 92.56 |
| `DEP_LAS` | 87.49 |
| `LEMMA_ACC` | 93.99 |
|
a60126897725a8974803924a5bf60d57
|
jed351/bart-zh-hk-wiki
|
jed351
|
bart
| 20 | 18 |
transformers
| 0 |
text2text-generation
| true | false | false |
other
|
['yue']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bart', 'cantonese', 'fill-mask']
| false | true | true | 746 | false |
# bart-base-cantonese
This is a Cantonese BART base model. It is based on another model created by Ayaka: https://huggingface.co/Ayaka/bart-base-cantonese
## Usage
```python
from transformers import BertTokenizer, BartForConditionalGeneration, Text2TextGenerationPipeline
tokenizer = BertTokenizer.from_pretrained('jed351/bart-zh-hk-wiki')
model = BartForConditionalGeneration.from_pretrained('jed351/bart-zh-hk-wiki')
text2text_generator = Text2TextGenerationPipeline(model, tokenizer)
output = text2text_generator('聽日就要返香港,我激動到[MASK]唔着', max_length=50, do_sample=False)
print(output[0]['generated_text'].replace(' ', ''))
```
**Note**: Please use the `BertTokenizer` for the model vocabulary. DO NOT use the original `BartTokenizer`.
|
a12416b6d16895149b2be0343892e089
|
TransLL/distilbert-base-uncased-finetuned-emotion
|
TransLL
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.9225
- F1: 0.9224
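As a quick way to try the checkpoint, here is a minimal inference sketch using the `transformers` pipeline API (the example sentence is illustrative only, not from the training data):

```python
from transformers import pipeline

# Text-classification sketch: the pipeline returns the predicted emotion label and score.
classifier = pipeline("text-classification", model="TransLL/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy the training finally converged!"))
```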
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8532 | 1.0 | 250 | 0.3276 | 0.904 | 0.8999 |
| 0.2564 | 2.0 | 500 | 0.2236 | 0.9225 | 0.9224 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ca00c7eb98cba2116e8a4b42ba92582e
|
krinal214/bert-all
|
krinal214
|
bert
| 13 | 8 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['tydiqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,150 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-all
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5985
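Since the model was fine-tuned on TyDi QA for extractive question answering, a minimal usage sketch (with an illustrative question and context, not taken from the dataset) could look like this:

```python
from transformers import pipeline

# Extractive QA sketch: the pipeline returns the answer span found in the context.
qa = pipeline("question-answering", model="krinal214/bert-all")
result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="The bert-all model is a multilingual BERT fine-tuned on the TyDi QA dataset.",
)
print(result["answer"], result["score"])
```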
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1556 | 1.0 | 3552 | 0.5985 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
743b4d694ae28b0c0c215c027524e77b
|
elRivx/newhorrorfantasy_styleV2
|
elRivx
| null | 3 | 0 | null | 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 1,642 | false |
# newhorrorfantasy_style V2
Hi guys! This time I trained an SD 2.1 model as an upgrade of newhorrorfantasy_style. It is a Stable Diffusion model trained on 2010s horror and fantasy illustrations as a style.
The magic word for the tests is: newhorrorfantasy_style
If you want to test it, add this word to your prompt: newhorrorfantasy_style
[](https://www.buymeacoffee.com/elrivx)
Examples:
<img src=https://imgur.com/OuuBNVb.png width=30% height=30%>
<img src=https://imgur.com/QjE6DFK.png width=30% height=30%>
<img src=https://imgur.com/leksMVO.png width=30% height=30%>
<img src=https://imgur.com/3WUVxRN.png width=30% height=30%>
<img src=https://imgur.com/veVe6s1.png width=30% height=30%>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
a413f22df76179aab5812b1f3bf203e6
|
Buseak/BerTurkBase_15_epoch
|
Buseak
|
bert
| 12 | 13 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,087 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BerTurkBase_15_epoch
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0005
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.6526 | 0.5972 |
| No log | 2.0 | 100 | 0.1755 | 0.9653 |
| No log | 3.0 | 150 | 0.0518 | 0.9861 |
| No log | 4.0 | 200 | 0.0065 | 1.0 |
| No log | 5.0 | 250 | 0.0022 | 1.0 |
| No log | 6.0 | 300 | 0.0016 | 1.0 |
| No log | 7.0 | 350 | 0.0007 | 1.0 |
| No log | 8.0 | 400 | 0.0005 | 1.0 |
| No log | 9.0 | 450 | 0.0005 | 1.0 |
| 0.1362 | 10.0 | 500 | 0.0005 | 1.0 |
| 0.1362 | 11.0 | 550 | 0.0006 | 1.0 |
| 0.1362 | 12.0 | 600 | 0.0005 | 1.0 |
| 0.1362 | 13.0 | 650 | 0.0005 | 1.0 |
| 0.1362 | 14.0 | 700 | 0.0005 | 1.0 |
| 0.1362 | 15.0 | 750 | 0.0005 | 1.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
39dfb15b1c21f80d5ffb49101b7076d9
|
domenicrosati/deberta-mlm-test
|
domenicrosati
|
deberta-v2
| 27 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['fill-mask', 'generated_from_trainer']
| true | true | true | 1,511 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-mlm-test
This model is a fine-tuned version of [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2792
- Accuracy: 0.4766
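As a fill-mask model, it can be queried with the masked-language-modeling pipeline; a minimal sketch (the example sentence is illustrative):

```python
from transformers import pipeline

# Fill-mask sketch: the model ranks candidate tokens for the [MASK] position.
fill_mask = pipeline("fill-mask", model="domenicrosati/deberta-mlm-test")
for prediction in fill_mask("The capital of France is [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```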
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.4466 | 1.0 | 2067 | 4.1217 | 0.3847 |
| 3.9191 | 2.0 | 4134 | 3.6562 | 0.4298 |
| 3.6397 | 3.0 | 6201 | 3.4417 | 0.4550 |
| 3.522 | 4.0 | 8268 | 3.3239 | 0.4692 |
| 3.4504 | 5.0 | 10335 | 3.2792 | 0.4766 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0a0+17540c5
- Datasets 2.3.2
- Tokenizers 0.12.1
|
859deafc42f8f939040a5b564ba0af4e
|
laion/CLIP-ViT-H-14-laion2B-s32B-b79K
|
laion
|
clip
| 12 | 1,214,399 |
open_clip
| 69 | null | true | false | false |
mit
| null | null | null | 2 | 0 | 2 | 0 | 4 | 3 | 1 |
[]
| false | true | true | 7,930 | false |
# Model Card for CLIP ViT-H/14 - LAION-2B
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
A CLIP ViT-H/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
Model training done by Romain Beaumont on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Further to the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with the 2 Billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publically available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
Please see [training notes](https://docs.google.com/document/d/1EFbMLRWSSV0LUf9Du1pWzWqgeiIRPwEWX2s1C6mAk5c) and [wandb logs](https://wandb.ai/rom1504/eval_openclip/reports/H-14--VmlldzoyNDAxODQ3).
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
**TODO** - more detail
## Results
The model achieves a 78.0 zero-shot top-1 accuracy on ImageNet-1k.
An initial round of benchmarks have been performed on a wider range of datasets, currently viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
**TODO** - create table for just this model's metrics.
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for the compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenAI CLIP paper
```
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How to Get Started with the Model
Use the code below to get started with the model.
** TODO ** - Hugging Face transformers, OpenCLIP, and timm getting started snippets
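Until those official snippets are added, here is a minimal zero-shot image classification sketch, assuming the repository's weights can be loaded with the Hugging Face `transformers` CLIP classes (the image URL and candidate labels are illustrative only):

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Zero-shot image classification sketch with the transformers CLIP classes.
model = CLIPModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
processor = CLIPProcessor.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")

image = Image.open(
    requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw
)
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarities -> probabilities
print(dict(zip(labels, probs[0].tolist())))
```

OpenCLIP and timm provide their own loading paths for this checkpoint; consult their documentation for equivalent snippets.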
|
3210c292d89d813895e1ded630b4f21b
|
Finnish-NLP/t5-base-nl36-finnish
|
Finnish-NLP
|
t5
| 21 | 21 |
transformers
| 1 |
text2text-generation
| true | false | true |
apache-2.0
|
['fi']
|
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['finnish', 't5', 't5x', 'seq2seq']
| false | true | true | 9,443 | false |
# T5-base-nl36 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-base-nl36](https://huggingface.co/google/t5-efficient-base-nl36) architecture's layer depth which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "base" model's architecture of 12 transformer layers.
In total, this model has 814 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike the Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision so fine-tune them with full fp32 precision. You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
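To make the filtering step concrete, here is a minimal sketch of perplexity scoring with the `kenlm` Python bindings (the model path, example texts, and the way the 90th-percentile threshold is computed are illustrative placeholders, not the actual cleaning pipeline):

```python
import kenlm

# Score each text with a KenLM model trained on clean Finnish text only;
# keep texts whose perplexity falls below a percentile-based threshold.
lm = kenlm.Model("clean_finnish.arpa")  # placeholder path to a trained KenLM model

texts = ["Tämä on esimerkkilause.", "asdf qwer zxcv"]
scores = [lm.perplexity(t) for t in texts]

# In practice the threshold is computed over the full corpus, not two examples.
threshold = sorted(scores)[int(0.9 * len(scores))]
kept = [t for t, s in zip(texts, scores) if s <= threshold]
print(kept)
```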
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps with a batch size of 64 (33B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the sixth row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
6edfcd231609cb9df030792d6bd2dc63
|
nalisten1/nalisten-likeness-1
|
nalisten1
| null | 20 | 23 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 750 | false |
### Nalisten-Likeness-1 Dreambooth model trained by nalisten1 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
23c30490afd6526ee25564a304f60c41
|
AkmalAshirmatov/first_try
|
AkmalAshirmatov
|
wav2vec2
| 13 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,078 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_try
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_7_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.10.3
|
d6f3005b40ccf738b12b903338417b7f
|
csikasote/whisper-medium-toi
|
csikasote
|
whisper
| 15 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,705 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-toi
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8215
- Wer: 59.6163
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4727 | 1.47 | 500 | 2.0656 | 70.8002 |
| 0.2033 | 2.95 | 1000 | 2.0971 | 67.6416 |
| 0.0658 | 4.42 | 1500 | 2.3894 | 62.0262 |
| 0.0281 | 5.9 | 2000 | 2.5443 | 62.2134 |
| 0.0104 | 7.37 | 2500 | 2.6873 | 61.8390 |
| 0.0046 | 8.85 | 3000 | 2.7252 | 60.6458 |
| 0.0004 | 10.32 | 3500 | 2.7891 | 60.8563 |
| 0.0003 | 11.8 | 4000 | 2.8215 | 59.6163 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f91dbfbdf30b5c18876eea6eab767c66
|
mrm8488/convbert-small-spanish
|
mrm8488
|
convbert
| 9 | 3 |
transformers
| 1 |
feature-extraction
| true | true | false |
mit
|
['es']
|
['large_spanish_corpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 922 | false |
# ConvBERT small pre-trained on large_spanish_corpus
The ConvBERT architecture is presented in the ["ConvBERT: Improving BERT with Span-based Dynamic Convolution"](https://arxiv.org/abs/2008.02496) paper.
## Metrics on evaluation set
```
disc_accuracy = 0.95163906
disc_auc = 0.9405496
disc_loss = 0.13658184
disc_precision = 0.80829453
disc_recall = 0.49316448
global_step = 1000000
loss = 9.12079
masked_lm_accuracy = 0.53505784
masked_lm_loss = 2.3028736
sampled_masked_lm_accuracy = 0.44047198
```
## Usage
```python
from transformers import AutoModel, AutoTokenizer
model_name = "mrm8488/convbert-small-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) with the support of [Narrativa](https://www.narrativa.com/)
> Made with <span style="color: #e25555;">♥</span> in Spain
|
cfebe33b1efafbb8b9038419cb210c85
|
Pavithra/codeparrot-ds-sample-gpt-small-neo-10epoch1
|
Pavithra
|
gpt_neo
| 13 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds-sample-gpt-small-neo-10epoch1
This model is a fine-tuned version of [EleutherAI/gpt-neo-125M](https://huggingface.co/EleutherAI/gpt-neo-125M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.5639 | 0.94 | 1000 | 2.9253 |
| 2.3253 | 1.88 | 2000 | 2.4563 |
| 1.8494 | 2.82 | 3000 | 2.2655 |
| 1.5133 | 3.77 | 4000 | 2.1635 |
| 1.249 | 4.71 | 5000 | 2.1414 |
| 1.0194 | 5.65 | 6000 | 2.1818 |
| 0.7999 | 6.59 | 7000 | 2.2738 |
| 0.5971 | 7.53 | 8000 | 2.3910 |
| 0.4238 | 8.47 | 9000 | 2.5062 |
| 0.3107 | 9.42 | 10000 | 2.5696 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cd9f732a01bc7121279eadcea89157f7
|
DrishtiSharma/whisper-large-v2-punjabi
|
DrishtiSharma
|
whisper
| 15 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pa']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Punjabi - Drishti Sharma
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2846
- Wer: 19.7125
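A minimal transcription sketch with the `transformers` ASR pipeline (the audio file name is a placeholder; inputs should be 16 kHz mono audio):

```python
from transformers import pipeline

# Speech-to-text sketch: transcribe a Punjabi audio clip with the fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/whisper-large-v2-punjabi",
    chunk_length_s=30,  # chunk long recordings into 30-second windows
)
print(asr("sample_punjabi.wav")["text"])  # placeholder file name
```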
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0004 | 8.26 | 1000 | 0.2846 | 19.7125 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
f2ab5a825c5c012c81fb01c1e29c308e
|
gchhablani/bert-base-cased-finetuned-rte
|
gchhablani
|
bert
| 49 | 105 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'fnet-bert-base-comparison']
| true | true | true | 2,207 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-rte
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7260
- Accuracy: 0.6715
The model was fine-tuned to compare [google/fnet-base](https://huggingface.co/google/fnet-base) as introduced in [this paper](https://arxiv.org/abs/2105.03824) against [bert-base-cased](https://huggingface.co/bert-base-cased).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
This model is trained using the [run_glue](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) script. The following command was used:
```bash
#!/usr/bin/bash
python ../run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name rte \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --per_device_train_batch_size 16 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir bert-base-cased-finetuned-rte \
  --push_to_hub \
  --hub_strategy all_checkpoints \
  --logging_strategy epoch \
  --save_strategy epoch \
  --evaluation_strategy epoch
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6915 | 1.0 | 156 | 0.6491 | 0.6606 |
| 0.55 | 2.0 | 312 | 0.6737 | 0.6570 |
| 0.3955 | 3.0 | 468 | 0.7260 | 0.6715 |
### Framework versions
- Transformers 4.11.0.dev0
- Pytorch 1.9.0
- Datasets 1.12.1
- Tokenizers 0.10.3
|
1691514cb00244564595015d8a3a7b0a
|
JustAdvanceTechonology/bert-fine-tuned-medical-insurance-ner
|
JustAdvanceTechonology
|
bert
| 8 | 11 |
transformers
| 2 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,460 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# JustAdvanceTechonology/bert-fine-tuned-medical-insurance-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0269
- Validation Loss: 0.0551
- Epoch: 2
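Since only TensorFlow weights are published for this checkpoint, a minimal token-classification sketch would load it with `framework="tf"` (the example sentence is illustrative; the actual entity label set is defined in the model config):

```python
from transformers import pipeline

# Token-classification (NER) sketch using the TensorFlow weights of this checkpoint.
ner = pipeline(
    "token-classification",
    model="JustAdvanceTechonology/bert-fine-tuned-medical-insurance-ner",
    framework="tf",
    aggregation_strategy="simple",  # merge word-piece tokens into whole entities
)
print(ner("The patient filed a claim with Acme Health Insurance for an MRI scan."))
```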
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
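The optimizer configuration above corresponds to an `AdamWeightDecay` optimizer with a linear (power-1.0 polynomial) learning-rate decay over 2631 steps. A rough reconstruction with the `transformers` Keras helper might look like the sketch below (an approximation, not the exact training script):

```python
from transformers import create_optimizer

# Recreate an AdamWeightDecay optimizer with a polynomial (linear) decay schedule:
# 2e-5 initial learning rate decayed to 0 over 2631 steps, 0.01 weight decay rate.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=2631,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```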
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1775 | 0.0646 | 0 |
| 0.0454 | 0.0580 | 1 |
| 0.0269 | 0.0551 | 2 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.5.0
- Datasets 1.18.3
- Tokenizers 0.11.6
|
46e6ebd1da629e71f3133f5b6bac0bfc
|
underactuated/opt-350m_rl1_va1
|
underactuated
|
opt
| 10 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 888 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-350m_rl1_va1
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
36bebd1a89953df46d9524403c418dc0
|
henryscheible/sst2
|
henryscheible
|
bert
| 14 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 995 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3521
- Accuracy: 0.9335
## Model description
More information needed
## Intended uses & limitations
More information needed
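A minimal inference sketch with the `transformers` text-classification pipeline (the model id and task come from this card; the input sentence is illustrative, and labels may be reported as `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

# load the fine-tuned SST-2 sentiment classifier from the Hub
classifier = pipeline("text-classification", model="henryscheible/sst2")
print(classifier("A touching and well-acted film."))
```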
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
cd1f76c8706cb05aa6fb1a4852b63c9b
|
speechbrain/sepformer-wsj03mix
|
speechbrain
| null | 14 | 123 |
speechbrain
| 1 |
audio-to-audio
| false | false | false |
apache-2.0
|
['en']
|
['WSJ0-3Mix']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Source Separation', 'Speech Separation', 'Audio Source Separation', 'WSJ0-3Mix', 'SepFormer', 'Transformer', 'audio-to-audio', 'audio-source-separation', 'speechbrain']
| false | true | true | 3,801 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# SepFormer trained on WSJ0-3Mix
This repository provides all the necessary tools to perform audio source separation with a [SepFormer](https://arxiv.org/abs/2010.13154v2)
model, implemented with SpeechBrain, and pretrained on WSJ0-3Mix dataset. For a better experience we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The model performance is 19.8 dB SI-SNRi on the test set of WSJ0-3Mix dataset.
| Release | Test-Set SI-SNRi | Test-Set SDRi |
|:-------------:|:--------------:|:--------------:|
| 09-03-21 | 19.8dB | 20.0dB |
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform source separation on your own audio file
```python
from speechbrain.pretrained import SepformerSeparation as separator
import torchaudio
model = separator.from_hparams(source="speechbrain/sepformer-wsj03mix", savedir='pretrained_models/sepformer-wsj03mix')
est_sources = model.separate_file(path='speechbrain/sepformer-wsj03mix/test_mixture_3spks.wav')
torchaudio.save("source1hat.wav", est_sources[:, :, 0].detach().cpu(), 8000)
torchaudio.save("source2hat.wav", est_sources[:, :, 1].detach().cpu(), 8000)
torchaudio.save("source3hat.wav", est_sources[:, :, 2].detach().cpu(), 8000)
```
The system expects input recordings sampled at 8kHz (single channel).
If your signal has a different sample rate, resample it (e.g., using torchaudio or sox) before using the interface.
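For example, a minimal resampling sketch with torchaudio (file paths are placeholders):
```python
import torchaudio

# load a recording at its native sample rate
signal, fs = torchaudio.load("my_mixture.wav")
# resample to the 8kHz expected by this model
resampler = torchaudio.transforms.Resample(orig_freq=fs, new_freq=8000)
torchaudio.save("my_mixture_8k.wav", resampler(signal), 8000)
```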
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
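For example:
```python
from speechbrain.pretrained import SepformerSeparation as separator

model = separator.from_hparams(
    source="speechbrain/sepformer-wsj03mix",
    savedir="pretrained_models/sepformer-wsj03mix",
    run_opts={"device": "cuda"},  # run the separation on GPU
)
```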
### Training
The model was trained with SpeechBrain (fc2eabb7).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/WSJ0Mix/separation
python train.py hparams/sepformer.yaml --data_folder=your_data_folder
```
Note: change num_spks to 3 in the yaml file.
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1ruScDoqiSDNeoDa__u5472UUPKPu54b2?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing SpeechBrain
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
#### Referencing SepFormer
```bibtex
@inproceedings{subakan2021attention,
title={Attention is All You Need in Speech Separation},
author={Cem Subakan and Mirco Ravanelli and Samuele Cornell and Mirko Bronzi and Jianyuan Zhong},
year={2021},
booktitle={ICASSP 2021}
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
|
b3559d93e2ad0bcd3216692fc7877c1c
|
Helsinki-NLP/opus-mt-fi-uk
|
Helsinki-NLP
|
marian
| 10 | 18 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-fi-uk
* source languages: fi
* target languages: uk
* OPUS readme: [fi-uk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-uk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-uk/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fi.uk | 23.3 | 0.445 |
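A minimal usage sketch with the `transformers` translation pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fi-uk")
print(translator("Hyvää huomenta!"))  # Finnish -> Ukrainian
```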
|
5ccd0a77547167723dc9c0809e384d48
|
IDEA-CCNL/Zhouwenwang-Unified-1.3B
|
IDEA-CCNL
|
megatron-bert
| 5 | 49 |
transformers
| 1 | null | true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,952 | false |
# Zhouwenwang-Unified-1.3B
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
与追一科技合作探索的中文统一模型,13亿参数的编码器结构模型。
The Chinese unified model explored in cooperation with Zhuiyi Technology, the encoder structure model with 1.3B parameters.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 特殊 Special | 探索 Exploration | 周文王 Zhouwenwang | 待定 TBD | 1.3B | 中文 Chinese |
## 模型信息 Model Information
IDEA研究院认知计算中心联合追一科技有限公司提出的具有新结构的大模型。该模型在预训练阶段时考虑统一LM和MLM的任务,这让其同时具备生成和理解的能力,并且增加了旋转位置编码技术。目前已有13亿参数的Zhouwenwang-Unified-1.3B大模型,是中文领域中可以同时做LM和MLM任务的最大的模型。我们后续会持续在模型规模、知识融入、监督辅助任务等方向不断优化。
A large-scale model (Zhouwenwang-Unified-1.3B) with a new structure proposed by IDEA CCNL and Zhuiyi Technology. The model considers the task of unifying LM (Language Modeling) and MLM (Masked Language Modeling) during the pre-training phase, which gives it both generative and comprehension capabilities, and applies rotational position encoding. At present, Zhouwenwang-Unified-1.3B with 1.3B parameters is the largest Chinese model that can do both LM and MLM tasks. In the future, we will continue to optimize it in the direction of model size, knowledge incorporation, and supervisory assistance tasks.
### 下游任务 Performance
下游中文任务的得分(没有做任何数据增强)。
Scores on downstream Chinese tasks (without any data augmentation).
| 模型 Model | afqmc | tnews | iflytek | ocnli | cmnli | wsc | csl |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: | :----: |
| roberta-wwm-ext-large | 0.7514 | 0.5872 | 0.6152 | 0.7770 | 0.8140 | 0.8914 | 0.8600 |
| Zhouwenwang-Unified-1.3B | 0.7463 | 0.6036 | 0.6288 | 0.7654 | 0.7741 | 0.8849 | 0.8777 |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有 Zhouwenwang-Unified-1.3B相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure of Zhouwenwang-Unified-1.3B in [transformers library](https://github.com/huggingface/transformers), you can find the structure of Zhouwenwang-Unified-1.3B and run the codes in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
### 加载模型 Loading Models
```python
from fengshen import RoFormerModel
from fengshen import RoFormerConfig
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
config = RoFormerConfig.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
```
### 使用示例 Usage Examples
你可以使用该模型进行续写任务。
You can use the model for continuation writing tasks.
```python
from fengshen import RoFormerModel
from transformers import AutoTokenizer
import torch
import numpy as np
sentence = '清华大学位于'
max_length = 32
tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
model = RoFormerModel.from_pretrained("IDEA-CCNL/Zhouwenwang-Unified-1.3B")
for i in range(max_length):
encode = torch.tensor(
[[tokenizer.cls_token_id]+tokenizer.encode(sentence, add_special_tokens=False)]).long()
logits = model(encode)[0]
logits = torch.nn.functional.linear(
logits, model.embeddings.word_embeddings.weight)
logits = torch.nn.functional.softmax(
logits, dim=-1).cpu().detach().numpy()[0]
sentence = sentence + \
tokenizer.decode(int(np.random.choice(logits.shape[1], p=logits[-1])))
if sentence[-1] == '。':
break
print(sentence)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
d8d8b0375a3902a4b5b64757c940b76f
|
celinelee/bart-finetuned-conala-3
|
celinelee
|
bart
| 16 | 1 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 5,241 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-conala-3
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the CoNaLa dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8253
- Rouge1: 47.4345
- Rouge2: 23.8936
- Rougel: 45.317
- Rougelsum: 45.4339
- Bleu: 0.0657
- Gen Len: 58.0
## Model description
More information needed
## Intended uses & limitations
Code snippet -> NL intent
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:------:|:-------:|
| No log | 0.08 | 50 | 2.7823 | 35.8458 | 12.1898 | 33.7466 | 33.8377 | 0.0041 | 58.0 |
| No log | 0.17 | 100 | 2.4223 | 37.2633 | 13.429 | 34.4943 | 34.5533 | 0.0087 | 58.0 |
| No log | 0.25 | 150 | 2.2696 | 40.6963 | 16.5785 | 38.1213 | 38.16 | 0.0167 | 58.0 |
| No log | 0.34 | 200 | 2.3168 | 41.3324 | 17.292 | 39.0117 | 39.113 | 0.0173 | 58.0 |
| No log | 0.42 | 250 | 2.3187 | 41.1345 | 16.6829 | 38.8514 | 38.891 | 0.0237 | 58.0 |
| No log | 0.5 | 300 | 2.1701 | 41.0145 | 17.5601 | 39.166 | 39.249 | 0.0206 | 58.0 |
| No log | 0.59 | 350 | 2.2035 | 41.7506 | 17.7251 | 39.4856 | 39.5647 | 0.0292 | 58.0 |
| No log | 0.67 | 400 | 2.1006 | 43.0324 | 19.9801 | 40.8704 | 40.9399 | 0.0319 | 58.0 |
| No log | 0.76 | 450 | 2.0563 | 43.2151 | 18.7409 | 40.4183 | 40.502 | 0.0244 | 58.0 |
| 2.4902 | 0.84 | 500 | 2.0468 | 43.2215 | 18.3484 | 40.9498 | 41.0682 | 0.0317 | 58.0 |
| 2.4902 | 0.92 | 550 | 2.0222 | 44.9934 | 19.8389 | 42.4478 | 42.5687 | 0.0372 | 58.0 |
| 2.4902 | 1.01 | 600 | 2.1095 | 43.8293 | 19.5682 | 40.882 | 40.9518 | 0.0311 | 58.0 |
| 2.4902 | 1.09 | 650 | 2.0124 | 43.6928 | 19.6878 | 39.6602 | 39.7368 | 0.0417 | 58.0 |
| 2.4902 | 1.18 | 700 | 2.0027 | 46.2115 | 21.9475 | 43.5869 | 43.6713 | 0.0477 | 58.0 |
| 2.4902 | 1.26 | 750 | 1.9599 | 45.9388 | 22.0368 | 43.4731 | 43.5656 | 0.043 | 58.0 |
| 2.4902 | 1.34 | 800 | 1.9467 | 44.7518 | 20.4755 | 42.489 | 42.6274 | 0.0394 | 58.0 |
| 2.4902 | 1.43 | 850 | 1.9643 | 44.1584 | 20.8833 | 41.8848 | 41.9733 | 0.0441 | 58.0 |
| 2.4902 | 1.51 | 900 | 1.8926 | 47.3789 | 22.9104 | 45.0164 | 45.0822 | 0.0445 | 58.0 |
| 2.4902 | 1.6 | 950 | 1.8855 | 46.8329 | 22.1133 | 44.1788 | 44.2666 | 0.0431 | 58.0 |
| 1.8023 | 1.68 | 1000 | 1.9160 | 47.1319 | 22.9792 | 44.4807 | 44.6103 | 0.0475 | 58.0 |
| 1.8023 | 1.76 | 1050 | 1.8498 | 48.8005 | 24.4785 | 46.4564 | 46.5427 | 0.0576 | 58.0 |
| 1.8023 | 1.85 | 1100 | 1.8611 | 47.8327 | 23.2086 | 45.5999 | 45.6868 | 0.0487 | 58.0 |
| 1.8023 | 1.93 | 1150 | 1.8497 | 47.7267 | 23.2021 | 45.5104 | 45.546 | 0.0512 | 58.0 |
| 1.8023 | 2.02 | 1200 | 1.8335 | 47.1502 | 22.8336 | 44.7614 | 44.7927 | 0.0566 | 58.0 |
| 1.8023 | 2.1 | 1250 | 1.8779 | 46.6645 | 22.9162 | 44.0086 | 44.2021 | 0.0539 | 58.0 |
| 1.8023 | 2.18 | 1300 | 1.8514 | 48.1544 | 24.7977 | 45.949 | 46.0254 | 0.0719 | 58.0 |
| 1.8023 | 2.27 | 1350 | 1.8658 | 46.7655 | 23.4813 | 44.5872 | 44.6907 | 0.069 | 58.0 |
| 1.8023 | 2.35 | 1400 | 1.8400 | 46.2749 | 23.6528 | 44.3149 | 44.4056 | 0.0572 | 58.0 |
| 1.8023 | 2.44 | 1450 | 1.8343 | 46.6169 | 23.8005 | 44.5486 | 44.6125 | 0.0547 | 58.0 |
| 1.3851 | 2.52 | 1500 | 1.8220 | 47.4739 | 24.3457 | 45.4959 | 45.6216 | 0.0662 | 58.0 |
| 1.3851 | 2.61 | 1550 | 1.8333 | 47.6311 | 24.3616 | 45.5904 | 45.6146 | 0.0666 | 58.0 |
| 1.3851 | 2.69 | 1600 | 1.8091 | 47.4633 | 24.0785 | 45.2493 | 45.2845 | 0.0645 | 58.0 |
| 1.3851 | 2.77 | 1650 | 1.8085 | 47.6495 | 23.8386 | 45.5077 | 45.5848 | 0.0639 | 58.0 |
| 1.3851 | 2.86 | 1700 | 1.8377 | 46.9721 | 23.4325 | 44.8386 | 44.9003 | 0.0647 | 58.0 |
| 1.3851 | 2.94 | 1750 | 1.8238 | 47.5266 | 23.9843 | 45.3897 | 45.473 | 0.0653 | 58.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 2.1.0
- Tokenizers 0.10.3
|
7f1c387078d73778a300e0d88f2a30fa
|
lmqg/flan-t5-small-squad-qg-ae
|
lmqg
|
t5
| 20 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation', 'answer extraction']
| true | true | true | 7,077 | false |
# Model Card of `lmqg/flan-t5-small-squad-qg-ae`
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for question generation and answer extraction jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/flan-t5-small-squad-qg-ae")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-qg-ae")
# question generation
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
# answer extraction
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.22 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 56.61 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 40.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 31.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 24.42 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 25.56 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 63.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 51.44 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------|
| QAAlignedF1Score (BERTScore) | 93.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 63.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 93.91 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 65.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:---------------------------------------------------------------|
| AnswerExactMatch | 55.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 67.9 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 49.09 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 44.12 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 39.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 34.94 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 41.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 80.7 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 67.28 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/flan-t5-small
- max_length: 512
- max_length_output: 32
- epoch: 7
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-qg-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
d225e5a013104a88beb1109c1b5eaa3d
|
succinctly/text2image-prompt-generator
|
succinctly
|
gpt2
| 11 | 12,614 |
transformers
| 70 |
text-generation
| true | false | false |
cc-by-2.0
|
['en']
|
['succinctly/midjourney-prompts']
| null | 1 | 0 | 1 | 0 | 1 | 1 | 0 |
['text2image', 'prompting']
| false | true | true | 1,412 | false |
This is a GPT-2 model fine-tuned on the [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts) dataset, which contains 250k text prompts that users issued to the [Midjourney](https://www.midjourney.com/) text-to-image service over a one-month period. For more details on how this dataset was scraped, see [Midjourney User Prompts & Generated Images (250k)](https://www.kaggle.com/datasets/succinctlyai/midjourney-texttoimage).
This prompt generator can be used to auto-complete prompts for any text-to-image model (including the DALL·E family):

Note that, while this model can be used together with any text-to-image model, it occasionally produces Midjourney-specific tags. Users can specify certain requirements via [double-dashed parameters](https://midjourney.gitbook.io/docs/imagine-parameters) (e.g. `--ar 16:9` sets the aspect ratio to 16:9, and `--no snake` asks the model to exclude snakes from the generated image) or set the importance of various entities in the image via [explicit weights](https://midjourney.gitbook.io/docs/user-manual#advanced-text-weights) (e.g. `hot dog::1.5 food::-1` is likely to produce the image of an animal instead of a frankfurter).
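A minimal usage sketch with the `transformers` text-generation pipeline (the seed text and sampling settings are illustrative):
```python
from transformers import pipeline

prompt_generator = pipeline("text-generation", model="succinctly/text2image-prompt-generator")
seed = "a portrait of a cyberpunk fox"
for out in prompt_generator(seed, max_length=77, num_return_sequences=3, do_sample=True, temperature=0.9):
    print(out["generated_text"])
```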
When using this model, please attribute credit to [Succinctly AI](https://succinctly.ai).
|
cb321e75820b72c0370887530df2c504
|
fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-1-5
|
fathyshalab
|
roberta
| 14 | 0 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,532 | false |
# fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-1-5
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/domain_transfer_clinic_credit_cards-massive_social-roberta-large-v1-1-5")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
cc44a5a09576c577150e5cc7f0d68840
|
sd-concepts-library/daycare-attendant-sun-fnaf
|
sd-concepts-library
| null | 10 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,264 | false |
### Daycare Attendant Sun FNAF on Stable Diffusion
This is the `<biblic-sun-fnaf>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
d5e11d723a6a316a13a843828b31fdd9
|
Katsiaryna/distilbert-base-uncased-finetuned_9th
|
Katsiaryna
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,475 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned_9th
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2826
- Accuracy: 0.4462
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2357 | 1.0 | 569 | 0.2277 | 0.3474 |
| 0.2237 | 2.0 | 1138 | 0.2316 | 0.3474 |
| 0.1847 | 3.0 | 1707 | 0.2456 | 0.3712 |
| 0.1302 | 4.0 | 2276 | 0.2763 | 0.4602 |
| 0.0863 | 5.0 | 2845 | 0.2826 | 0.4462 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
023e13b36cb3d538d88abccecdf8ac41
|
hopkins/codeparrot-ds
|
hopkins
|
gpt2
| 13 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null |
['generator']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,032 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
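A minimal generation sketch with the `transformers` text-generation pipeline (the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="hopkins/codeparrot-ds")
prompt = "# create a scatter plot from two lists x and y\nimport matplotlib.pyplot as plt\n"
print(generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])
```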
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
fcaa96ccd5256b9d00fb54e1babd5a86
|
juro95/xlm-roberta-finetuned-ner-2
|
juro95
|
xlm-roberta
| 9 | 3 |
transformers
| 0 |
token-classification
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,436 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juro95/xlm-roberta-finetuned-ner-2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0997
- Validation Loss: 0.1174
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 65805, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2954 | 0.1690 | 0 |
| 0.1468 | 0.1274 | 1 |
| 0.0997 | 0.1174 | 2 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.6.5
- Datasets 2.3.2
- Tokenizers 0.13.2
|
fc9dbb1be2376bedb075b0aa95c3f24c
|
nagolinc/sd-dune
|
nagolinc
| null | 182 | 5 |
diffusers
| 3 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 3 | 2 | 1 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,068 | false |
This model was trained starting from https://huggingface.co/runwayml/stable-diffusion-v1-5 for 15,000 steps using 2.5k images from https://dune.fandom.com/wiki/Dune_Wiki.
"bene gesserit"

"dune"

"paul atreides"

"sandworm"

"taylor swift"

"yoda"

"shai hulud"

|
82740967fa27eda5e04bda58f5671e26
|
Davlan/xlm-roberta-large-finetuned-hausa
|
Davlan
|
xlm-roberta
| 10 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,074 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hau_xlmr
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
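A minimal usage sketch with the `transformers` fill-mask pipeline (the Hausa sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="Davlan/xlm-roberta-large-finetuned-hausa")
print(unmasker("Shugaban <mask> Muhammadu Buhari ya gana da manoma a Abuja."))
```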
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.7.1+cu110
- Datasets 1.16.1
- Tokenizers 0.12.1
|
b5f65aaf9b4764433a4ec66d3bc831d3
|
hfl/rbt6
|
hfl
|
bert
| 11 | 9,071 |
transformers
| 4 |
fill-mask
| true | true | true |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['bert']
| false | true | true | 2,003 | false |
# This is a re-trained 6-layer RoBERTa-wwm-ext model.
## Chinese BERT with Whole Word Masking
For further accelerating Chinese natural language processing, we provide **Chinese pre-trained BERT with Whole Word Masking**.
**[Pre-Training with Whole Word Masking for Chinese BERT](https://arxiv.org/abs/1906.08101)**
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
This repository is developed based on: https://github.com/google-research/bert
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
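A minimal usage sketch with the `transformers` fill-mask pipeline (the checkpoint is stored as a BERT architecture, so BERT classes are used under the hood; the example sentence is illustrative):
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="hfl/rbt6")
print(unmasker("哈尔滨是[MASK]龙江省的省会。"))
```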
## Citation
If you find the technical report or resource useful, please cite the following technical report in your paper.
- Primary: https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
author = "Cui, Yiming and
Che, Wanxiang and
Liu, Ting and
Qin, Bing and
Wang, Shijin and
Hu, Guoping",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
pages = "657--668",
}
```
- Secondary: https://arxiv.org/abs/1906.08101
```
@article{chinese-bert-wwm,
title={Pre-Training with Whole Word Masking for Chinese BERT},
author={Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
journal={arXiv preprint arXiv:1906.08101},
year={2019}
}
```
|
da47a124055b3c0560a4669163633a01
|
Helsinki-NLP/opus-mt-be-es
|
Helsinki-NLP
|
marian
| 11 | 123 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['be', 'es']
| null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,008 | false |
### bel-spa
* source group: Belarusian
* target group: Spanish
* OPUS readme: [bel-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md)
* model: transformer-align
* source language(s): bel bel_Latn
* target language(s): spa
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.bel.spa | 11.8 | 0.272 |
### System Info:
- hf_name: bel-spa
- source_languages: bel
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/bel-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'es']
- src_constituents: {'bel', 'bel_Latn'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/bel-spa/opus-2020-06-16.test.txt
- src_alpha3: bel
- tgt_alpha3: spa
- short_pair: be-es
- chrF2_score: 0.272
- bleu: 11.8
- brevity_penalty: 0.892
- ref_len: 1412.0
- src_name: Belarusian
- tgt_name: Spanish
- train_date: 2020-06-16
- src_alpha2: be
- tgt_alpha2: es
- prefer_old: False
- long_pair: bel-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
ca40e7f77e2a28b009cc4b47af36d035
|
imdanboy/jets
|
imdanboy
| null | 36 | 210 |
espnet
| 1 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['ljspeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 11,456 | false |
## ESPnet2 TTS model
### `imdanboy/jets`
This model was trained by imdanboy using ljspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout c173c30930631731e6836c274a591ad571749741
pip install -e .
cd egs2/ljspeech/tts1
./run.sh --skip_data_prep false --skip_train true --download_model imdanboy/jets
```
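A minimal Python inference sketch (assuming `espnet2`, `espnet_model_zoo`, and `soundfile` are installed; the example text is illustrative):
```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("imdanboy/jets")  # downloads the model from the Hub
out = tts("Hello, this is a test of the JETS text to speech model.")
sf.write("out.wav", out["wav"].numpy(), tts.fs)
```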
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_jets.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_jets_raw_phn_tacotron_g2p_en_no_space
ngpu: 1
seed: 777
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 39471
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- text2mel_loss
- min
- - train
- text2mel_loss
- min
- - train
- total_count
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: -1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 50
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 1000
batch_size: 20
valid_batch_size: null
batch_bins: 3000000
valid_batch_bins: null
train_shape_file:
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/text_shape.phn
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/speech_shape
valid_shape_file:
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/text_shape.phn
- exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/collect_feats/energy.scp
- energy
- npy
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/valid/collect_feats/energy.scp
- energy
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adamw
optim_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler: exponentiallr
scheduler_conf:
gamma: 0.999875
optim2: adamw
optim2_conf:
lr: 0.0002
betas:
- 0.8
- 0.99
eps: 1.0e-09
weight_decay: 0.0
scheduler2: exponentiallr
scheduler2_conf:
gamma: 0.999875
generator_first: true
token_list:
- <blank>
- <unk>
- AH0
- N
- T
- D
- S
- R
- L
- DH
- K
- Z
- IH1
- IH0
- M
- EH1
- W
- P
- AE1
- AH1
- V
- ER0
- F
- ','
- AA1
- B
- HH
- IY1
- UW1
- IY0
- AO1
- EY1
- AY1
- .
- OW1
- SH
- NG
- G
- ER1
- CH
- JH
- Y
- AW1
- TH
- UH1
- EH2
- OW0
- EY2
- AO0
- IH2
- AE2
- AY2
- AA2
- UW0
- EH0
- OY1
- EY0
- AO2
- ZH
- OW2
- AE0
- UW2
- AH2
- AY0
- IY2
- AW2
- AA0
- ''''
- ER2
- UH2
- '?'
- OY2
- '!'
- AW0
- UH0
- OY0
- ..
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: phn
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en_no_space
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/feats_stats.npz
tts: jets
tts_conf:
generator_type: jets_generator
generator_params:
adim: 256
aheads: 2
elayers: 4
eunits: 1024
dlayers: 4
dunits: 1024
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
use_masking: true
encoder_normalize_before: true
decoder_normalize_before: true
encoder_type: transformer
decoder_type: transformer
conformer_rel_pos_type: latest
conformer_pos_enc_layer_type: rel_pos
conformer_self_attn_layer_type: rel_selfattn
conformer_activation_type: swish
use_macaron_style_in_conformer: true
use_cnn_in_conformer: true
conformer_enc_kernel_size: 7
conformer_dec_kernel_size: 31
init_type: xavier_uniform
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
generator_out_channels: 1
generator_channels: 512
generator_global_channels: -1
generator_kernel_size: 7
generator_upsample_scales:
- 8
- 8
- 2
- 2
generator_upsample_kernel_sizes:
- 16
- 16
- 4
- 4
generator_resblock_kernel_sizes:
- 3
- 7
- 11
generator_resblock_dilations:
- - 1
- 3
- 5
- - 1
- 3
- 5
- - 1
- 3
- 5
generator_use_additional_convs: true
generator_bias: true
generator_nonlinear_activation: LeakyReLU
generator_nonlinear_activation_params:
negative_slope: 0.1
generator_use_weight_norm: true
segment_size: 64
idim: 78
odim: 80
discriminator_type: hifigan_multi_scale_multi_period_discriminator
discriminator_params:
scales: 1
scale_downsample_pooling: AvgPool1d
scale_downsample_pooling_params:
kernel_size: 4
stride: 2
padding: 2
scale_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 15
- 41
- 5
- 3
channels: 128
max_downsample_channels: 1024
max_groups: 16
bias: true
downsample_scales:
- 2
- 2
- 4
- 4
- 1
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
follow_official_norm: false
periods:
- 2
- 3
- 5
- 7
- 11
period_discriminator_params:
in_channels: 1
out_channels: 1
kernel_sizes:
- 5
- 3
channels: 32
downsample_scales:
- 3
- 3
- 3
- 3
- 1
max_downsample_channels: 1024
bias: true
nonlinear_activation: LeakyReLU
nonlinear_activation_params:
negative_slope: 0.1
use_weight_norm: true
use_spectral_norm: false
generator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
discriminator_adv_loss_params:
average_by_discriminators: false
loss_type: mse
feat_match_loss_params:
average_by_discriminators: false
average_by_layers: false
include_final_outputs: true
mel_loss_params:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
window: hann
n_mels: 80
fmin: 0
fmax: null
log_base: null
lambda_adv: 1.0
lambda_mel: 45.0
lambda_feat_match: 2.0
lambda_var: 1.0
lambda_align: 2.0
sampling_rate: 22050
cache_generator_outputs: true
pitch_extract: dio
pitch_extract_conf:
reduction_factor: 1
use_token_averaged_f0: false
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
reduction_factor: 1
use_token_averaged_energy: false
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_stats_raw_phn_tacotron_g2p_en_no_space/train/energy_stats.npz
required:
- output_dir
- token_list
version: '202204'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
583f3e3a6ffc35d365fddfcc3f2ae555
|
DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9
|
DrishtiSharma
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['as']
|
['mozilla-foundation/common_voice_8_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'as', 'robust-speech-event', 'model_for_talk', 'hf-asr-leaderboard']
| true | true | true | 2,679 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-as-v9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
### Evaluation Command
1. To evaluate on mozilla-foundation/common_voice_8_0 with test split
python eval.py --model_id DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9 --dataset mozilla-foundation/common_voice_8_0 --config as --split test --log_outputs
2. To evaluate on speech-recognition-community-v2/dev_data
The Assamese (as) language isn't available in speech-recognition-community-v2/dev_data.
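A minimal inference sketch with the `transformers` ASR pipeline (the audio path is a placeholder for a 16kHz mono recording):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="DrishtiSharma/wav2vec2-large-xls-r-300m-as-v9")
print(asr("sample_assamese.wav"))
```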
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852 | 10.51 | 200 | 3.6402 | 1.0 |
| 3.5374 | 21.05 | 400 | 3.3894 | 1.0 |
| 2.8645 | 31.56 | 600 | 1.3143 | 0.8303 |
| 1.1784 | 42.1 | 800 | 0.9417 | 0.6661 |
| 0.7805 | 52.62 | 1000 | 0.9292 | 0.6237 |
| 0.5973 | 63.15 | 1200 | 0.9489 | 0.6014 |
| 0.4784 | 73.67 | 1400 | 0.9916 | 0.5962 |
| 0.4138 | 84.21 | 1600 | 1.0272 | 0.6121 |
| 0.3491 | 94.72 | 1800 | 1.0412 | 0.5984 |
| 0.3062 | 105.26 | 2000 | 1.0769 | 0.6005 |
| 0.2707 | 115.77 | 2200 | 1.0708 | 0.5752 |
| 0.2459 | 126.31 | 2400 | 1.1285 | 0.6009 |
| 0.2234 | 136.82 | 2600 | 1.1209 | 0.5949 |
| 0.2035 | 147.36 | 2800 | 1.1348 | 0.5842 |
| 0.1876 | 157.87 | 3000 | 1.1480 | 0.5872 |
| 0.1669 | 168.41 | 3200 | 1.1496 | 0.5838 |
| 0.1595 | 178.92 | 3400 | 1.1721 | 0.5778 |
| 0.1505 | 189.46 | 3600 | 1.1654 | 0.5744 |
| 0.1486 | 199.97 | 3800 | 1.1679 | 0.5761 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
4617f621b5a98cf89ce0543f00c243e2
|
flyswot/test
|
flyswot
|
vit
| 10 | 3 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['image_folder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,199 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2724
- F1: 0.1240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.001
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.0 | 1 | 2.2724 | 0.1240 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
3afb9518eaac8b4ed1366f5954fc02cf
|
timm/convnext_base.clip_laiona_augreg_ft_in1k_384
|
timm
| null | 4 | 29 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k', 'laion-aesthetic']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 24,153 | false |
# Model card for convnext_base.clip_laiona_augreg_ft_in1k_384
A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-1k in `timm` by Ross Wightman.
Please see related OpenCLIP model cards for more details on pretrain:
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 88.6
- GMACs: 45.2
- Activations (M): 84.5
- Image size: 384 x 384
- **Papers:**
- LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-Aesthetic
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnext_base.clip_laiona_augreg_ft_in1k_384', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_base.clip_laiona_augreg_ft_in1k_384',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for convnext_base:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_base.clip_laiona_augreg_ft_in1k_384',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
### By Throughput (samples / sec)
All timing numbers from eager-mode PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
|
7f9b413082956e9863f3b8a655112bbf
|
HCKLab/BiBert-MultiTask-2
|
HCKLab
|
bert
| 16 | 15 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,042 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BiBert-MultiTask-2
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
a6595584b881d614b26d521593b6397f
|
blmnk/distilbert-base-uncased-finetuned-news
|
blmnk
|
distilbert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 930 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu116
- Datasets 2.5.2
- Tokenizers 0.12.1
|
6a66c9b946062209792ffb16133136a0
|
ViktorDo/DistilBERT-POWO_MGH_Life_Form_Finetuned
|
ViktorDo
|
distilbert
| 12 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,317 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBERT-POWO_MGH_Life_Form_Finetuned
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3845
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
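For readers who want to set up a comparable run, here is a minimal sketch (not from the original card) of how these settings map to `TrainingArguments`; the output directory is a placeholder, and the Adam betas/epsilon and linear schedule are the `Trainer` defaults:
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; dataset/model wiring is omitted
training_args = TrainingArguments(
    output_dir="distilbert-powo-mgh-life-form",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
)
```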
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5891 | 1.0 | 914 | 0.4130 |
| 0.4207 | 2.0 | 1828 | 0.3868 |
| 0.3722 | 3.0 | 2742 | 0.3845 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
619db6ab5de446bbd140c39f4baa0792
|
microsoft/deberta-xlarge
|
microsoft
|
deberta
| 9 | 8,801 |
transformers
| 1 |
fill-mask
| true | true | false |
mit
|
['en']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['deberta-v1', 'fill-mask']
| false | true | true | 3,751 | false |
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With these two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa XLarge model, with 48 layers and a hidden size of 1024. Total parameters: 750M.
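As a quick sanity check that the checkpoint loads, here is a minimal `transformers` sketch (not part of the original card) that extracts the encoder's hidden states; the example sentence is purely illustrative:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-xlarge")
model = AutoModel.from_pretrained("microsoft/deberta-xlarge")

inputs = tokenizer("DeBERTa improves BERT with disentangled attention.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Last hidden states: (batch_size, sequence_length, hidden_size=1024)
print(outputs.last_hidden_state.shape)
```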
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuADv2 also improve slightly when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
e61b3a9fc3a4b8f307ee462ddd9a5eda
|
Shobhank-iiitdwd/RoBERTA-rrQA
|
Shobhank-iiitdwd
|
roberta
| 11 | 120 |
transformers
| 0 |
question-answering
| true | true | true |
cc-by-4.0
|
['en']
|
['squad_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 2,318 | false |
# roberta-base for QA
This is the [roberta-base](https://huggingface.co/roberta-base) model, fine-tuned using the [SQuAD2.0](https://huggingface.co/datasets/squad_v2) dataset. It's been trained on question-answer pairs, including unanswerable questions, for the task of Question Answering.
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
## Hyperparameters
```
batch_size = 96
n_epochs = 2
base_LM_model = "roberta-base"
max_seq_len = 386
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Usage
### In Haystack
Haystack is an NLP framework by deepset. You can use this model in a Haystack pipeline to do question answering at scale (over many documents). To load the model in [Haystack](https://github.com/deepset-ai/haystack/):
```python
reader = FARMReader(model_name_or_path="Shobhank-iiitdwd/RoBERTA-rrQA")
# or
reader = TransformersReader(model_name_or_path="Shobhank-iiitdwd/RoBERTA-rrQA",tokenizer="Shobhank-iiitdwd/RoBERTA-rrQA")
```
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "Shobhank-iiitdwd/RoBERTA-rrQA"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Performance
Evaluated on the SQuAD 2.0 dev set with the [official eval script](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/).
```
"exact": 79.87029394424324,
"f1": 82.91251169582613,
"total": 11873,
"HasAns_exact": 77.93522267206478,
"HasAns_f1": 84.02838248389763,
"HasAns_total": 5928,
"NoAns_exact": 81.79983179142137,
"NoAns_f1": 81.79983179142137,
"NoAns_total": 5945
```
|
60e8459f46755e05fd8d548352d63133
|
mohammed/wav2vec2-large-xlsr-arabic
|
mohammed
|
wav2vec2
| 9 | 7 |
transformers
| 2 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['ar']
|
['common_voice', 'arabic_speech_corpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,661 | false |
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53)
on Arabic using the `train` splits of [Common Voice](https://huggingface.co/datasets/common_voice)
and [Arabic Speech Corpus](https://huggingface.co/datasets/arabic_speech_corpus).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
# Install the dependencies first (e.g. in a notebook or shell):
# pip install datasets transformers==4.4.0 torchaudio jiwer tnkeeh
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("The predicted sentence is: ", processor.batch_decode(predicted_ids))
print("The original sentence is:", test_dataset["sentence"][:2])
```
The output is:
```
The predicted sentence is : ['ألديك قلم', 'ليست نارك مكسافة على هذه الأرض أبعد من يوم أمس']
The original sentence is: ['ألديك قلم ؟', 'ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.']
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# creating a dictionary with all diacritics
dict = {
'ِ': '',
'ُ': '',
'ٓ': '',
'ٰ': '',
'ْ': '',
'ٌ': '',
'ٍ': '',
'ً': '',
'ّ': '',
'َ': '',
'~': '',
',': '',
'ـ': '',
'—': '',
'.': '',
'!': '',
'-': '',
';': '',
':': '',
'\'': '',
'"': '',
'☭': '',
'«': '',
'»': '',
'؛': '',
'ـ': '',
'_': '',
'،': '',
'“': '',
'%': '',
'‘': '',
'”': '',
'�': '',
'_': '',
',': '',
'?': '',
'#': '',
'‘': '',
'.': '',
'؛': '',
'get': '',
'؟': '',
' ': ' ',
'\'ۖ ': '',
'\'': '',
'\'ۚ' : '',
' \'': '',
'31': '',
'24': '',
'39': ''
}
# replacing multiple diacritics using dictionary (stackoverflow is amazing)
def remove_special_characters(batch):
# Create a regular expression from the dictionary keys
regex = re.compile("(%s)" % "|".join(map(re.escape, dict.keys())))
# For each match, look-up corresponding value in dictionary
batch["sentence"] = regex.sub(lambda mo: dict[mo.string[mo.start():mo.end()]], batch["sentence"])
return batch
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("mohammed/wav2vec2-large-xlsr-arabic")
model.to("cuda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.699%
## Future Work
One can use *data augmentation*, *transliteration*, or *attention_mask* to increase the accuracy.
|
464d834071d9a672a485070dd62ad606
|
antgoldbloom/distilbert-rater
|
antgoldbloom
|
distilbert
| 6 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 920 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rater
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ae14e41dce89e7af668fb9f3a7b18b24
|
ainize/kobart-news
|
ainize
|
bart
| 7 | 389 |
transformers
| 5 |
summarization
| true | false | false |
mit
|
['ko']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'bart']
| false | true | true | 2,061 | false |
# kobart-news
- This model is a [kobart](https://huggingface.co/hyunwoongko/kobart) model fine-tuned on the [문서요약 텍스트/신문기사 (document summarization text / newspaper articles)](https://aihub.or.kr/aidata/8054) dataset using [Ainize Teachable-NLP](https://ainize.ai/teachable-nlp).
## Usage
### Python Code
```python
from transformers import PreTrainedTokenizerFast, BartForConditionalGeneration
# Load Model and Tokenize
tokenizer = PreTrainedTokenizerFast.from_pretrained("ainize/kobart-news")
model = BartForConditionalGeneration.from_pretrained("ainize/kobart-news")
# Encode Input Text
input_text = '국내 전반적인 경기침체로 상가 건물주의 수익도 전국적인 감소세를 보이고 있는 것으로 나타났다. 수익형 부동산 연구개발기업 상가정보연구소는 한국감정원 통계를 분석한 결과 전국 중대형 상가 순영업소득(부동산에서 발생하는 임대수입, 기타수입에서 제반 경비를 공제한 순소득)이 1분기 ㎡당 3만4200원에서 3분기 2만5800원으로 감소했다고 17일 밝혔다. 수도권, 세종시, 지방광역시에서 순영업소득이 가장 많이 감소한 지역은 3분기 1만3100원을 기록한 울산으로, 1분기 1만9100원 대비 31.4% 감소했다. 이어 대구(-27.7%), 서울(-26.9%), 광주(-24.9%), 부산(-23.5%), 세종(-23.4%), 대전(-21%), 경기(-19.2%), 인천(-18.5%) 순으로 감소했다. 지방 도시의 경우도 비슷했다. 경남의 3분기 순영업소득은 1만2800원으로 1분기 1만7400원 대비 26.4% 감소했으며 제주(-25.1%), 경북(-24.1%), 충남(-20.9%), 강원(-20.9%), 전남(-20.1%), 전북(-17%), 충북(-15.3%) 등도 감소세를 보였다. 조현택 상가정보연구소 연구원은 "올해 내수 경기의 침체된 분위기가 유지되며 상가, 오피스 등을 비롯한 수익형 부동산 시장의 분위기도 경직된 모습을 보였고 오피스텔, 지식산업센터 등의 수익형 부동산 공급도 증가해 공실의 위험도 늘었다"며 "실제 올 3분기 전국 중대형 상가 공실률은 11.5%를 기록하며 1분기 11.3% 대비 0.2% 포인트 증가했다"고 말했다. 그는 "최근 소셜커머스(SNS를 통한 전자상거래), 음식 배달 중개 애플리케이션, 중고 물품 거래 애플리케이션 등의 사용 증가로 오프라인 매장에 영향을 미쳤다"며 "향후 지역, 콘텐츠에 따른 상권 양극화 현상은 심화될 것으로 보인다"고 덧붙였다.'
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate Summary Text Ids
summary_text_ids = model.generate(
input_ids=input_ids,
bos_token_id=model.config.bos_token_id,
eos_token_id=model.config.eos_token_id,
length_penalty=2.0,
max_length=142,
min_length=56,
num_beams=4,
)
# Decoding Text
print(tokenizer.decode(summary_text_ids[0], skip_special_tokens=True))
```
### API and Demo
You can experience this model through [ainize-api](https://ainize.ai/gkswjdzz/summarize-torchserve?branch=main) and [ainize-demo](https://main-summarize-torchserve-gkswjdzz.endpoint.ainize.ai/).
|
24912fafcb75656e7d9cc459a80b2952
|
sd-concepts-library/kinda-sus
|
sd-concepts-library
| null | 9 | 0 | null | 2 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,005 | false |
### Kinda-sus on Stable Diffusion
This is the `<amogus>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
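Outside of the notebooks, a minimal `diffusers` sketch can also load the concept (this assumes a recent `diffusers` release that provides `load_textual_inversion`; the base model and prompt below are illustrative choices, not part of the original concept release):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion base model (assumed choice) and the <amogus> concept
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/kinda-sus")

# Use the learned token in a prompt
image = pipe("a drawing of <amogus> floating in space").images[0]
image.save("kinda_sus.png")
```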
Here is the new concept you will be able to use as an `object`:




|
5d38f47a3dfd1e2e761b5664e3718b6b
|
fathyshalab/all-roberta-large-v1-small_talk-7-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,515 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-7-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
4c5cef9610e90963a085d5cfbe919823
|
Helsinki-NLP/opus-mt-is-it
|
Helsinki-NLP
|
marian
| 11 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['is', 'it']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,989 | false |
### isl-ita
* source group: Icelandic
* target group: Italian
* OPUS readme: [isl-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-ita/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): ita
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.isl.ita | 46.7 | 0.662 |
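A minimal usage sketch with the `transformers` Marian classes (not part of the original card; the Icelandic example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-is-it"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Icelandic to Italian
batch = tokenizer(["Ég tala ekki ítölsku."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```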
### System Info:
- hf_name: isl-ita
- source_languages: isl
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['is', 'it']
- src_constituents: {'isl'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/isl-ita/opus-2020-06-17.test.txt
- src_alpha3: isl
- tgt_alpha3: ita
- short_pair: is-it
- chrF2_score: 0.662
- bleu: 46.7
- brevity_penalty: 0.977
- ref_len: 1450.0
- src_name: Icelandic
- tgt_name: Italian
- train_date: 2020-06-17
- src_alpha2: is
- tgt_alpha2: it
- prefer_old: False
- long_pair: isl-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
2eebce33009d01f151edb23b7f506d33
|
luffycodes/roberta-large-ner-conllpp-v1
|
luffycodes
|
roberta
| 11 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['conllpp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,699 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-ner-conllpp-v1
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conllpp dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Precision: 0.9581
- Recall: 0.9586
- F1: 0.9584
- Accuracy: 0.9629
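A minimal inference sketch with the `transformers` token-classification pipeline (not part of the original card; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="luffycodes/roberta-large-ner-conllpp-v1",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```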
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3602 | 1.0 | 878 | nan | 0.9450 | 0.9484 | 0.9467 | 0.9541 |
| 0.1101 | 2.0 | 1756 | nan | 0.9547 | 0.9569 | 0.9558 | 0.9620 |
| 0.053 | 3.0 | 2634 | nan | 0.9537 | 0.9572 | 0.9554 | 0.9614 |
| 0.0331 | 4.0 | 3512 | nan | 0.9560 | 0.9567 | 0.9563 | 0.9614 |
| 0.0219 | 5.0 | 4390 | nan | 0.9581 | 0.9586 | 0.9584 | 0.9629 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
da0e9d8198899648451c8831a89c0fea
|
muhtasham/finetuned-base_base
|
muhtasham
|
bert
| 10 | 16 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,489 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-base_base
This model is a fine-tuned version of [google/bert_uncased_L-12_H-768_A-12](https://huggingface.co/google/bert_uncased_L-12_H-768_A-12) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3594
- Accuracy: 0.9094
- F1: 0.9525
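A minimal inference sketch with the `transformers` text-classification pipeline (not part of the original card; the review text is illustrative, and the returned labels follow whatever mapping is stored in the checkpoint config, by default `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="muhtasham/finetuned-base_base")

# IMDB-style movie review input
print(classifier("This film was an absolute delight from start to finish."))
```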
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 50
- eval_batch_size: 50
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2414 | 1.0 | 500 | 0.1796 | 0.9343 | 0.9660 |
| 0.1235 | 2.0 | 1000 | 0.2042 | 0.9311 | 0.9643 |
| 0.0633 | 3.0 | 1500 | 0.3590 | 0.8997 | 0.9472 |
| 0.0398 | 4.0 | 2000 | 0.3594 | 0.9094 | 0.9525 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f230e64d71c5bc227dd13afd6d2ddff5
|
Helsinki-NLP/opus-mt-fr-hr
|
Helsinki-NLP
|
marian
| 10 | 15 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-fr-hr
* source languages: fr
* target languages: hr
* OPUS readme: [fr-hr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-hr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-hr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.fr.hr | 20.7 | 0.442 |
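A minimal usage sketch with the `transformers` translation pipeline (not part of the original card; the French example sentence is illustrative):
```python
from transformers import pipeline

# French -> Croatian translation
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-hr")
print(translator("Bonjour, comment allez-vous ?"))
```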
|
e749e1587a3f011ee6aba598a46a5f51
|
chcaa/da_dacy_small_trf
|
chcaa
| null | 26 | 244 |
spacy
| 2 |
token-classification
| false | false | false |
apache-2.0
|
['da']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['spacy', 'token-classification']
| false | true | true | 16,335 | false |
<a href="https://github.com/centre-for-humanities-computing/Dacy"><img src="https://centre-for-humanities-computing.github.io/DaCy/_static/icon.png" width="175" height="175" align="right" /></a>
# DaCy small transformer
DaCy is a Danish language processing framework with state-of-the-art pipelines as well as functionality for analysing Danish pipelines.
DaCy's largest pipeline has achieved state-of-the-art performance on named entity recognition, part-of-speech tagging and dependency
parsing for Danish on the DaNE dataset. Check out the [DaCy repository](https://github.com/centre-for-humanities-computing/DaCy) for material on how to use DaCy and reproduce the results.
DaCy also contains guides on usage of the package as well as behavioural tests for biases and robustness of Danish NLP pipelines.
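Once the pipeline package is installed (for example through DaCy or the released wheel), it can be used like any other spaCy pipeline; a minimal sketch (not part of the original card; the Danish example sentence is illustrative):
```python
import spacy

# Assumes the da_dacy_small_trf package has already been installed
nlp = spacy.load("da_dacy_small_trf")

doc = nlp("DaCy er en pakke til dansk sprogteknologi udviklet i Aarhus.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
for ent in doc.ents:
    print(ent.text, ent.label_)
```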
| Feature | Description |
| --- | --- |
| **Name** | `da_dacy_small_trf` |
| **Version** | `0.1.0` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `transformer`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `transformer`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Danish DDT v2.5](https://github.com/UniversalDependencies/UD_Danish-DDT) (Johannsen, Anders; Martínez Alonso, Héctor; Plank, Barbara)<br />[DaNE](https://github.com/alexandrainst/danlp/blob/master/docs/datasets.md#danish-dependency-treebank-dane) (Rasmus Hvingelby, Amalie B. Pauli, Maria Barrett, Christina Rosted, Lasse M. Lidegaard, Anders Søgaard)<br />[Maltehb/-l-ctra-danish-electra-small-cased](https://huggingface.co/Maltehb/-l-ctra-danish-electra-small-cased) (Malte Højmark-Bertelsen) |
| **License** | `Apache-2.0 License` |
| **Author** | [Centre for Humanities Computing Aarhus](https://chcaa.io/#/) |
### Label Scheme
<details>
<summary>View label scheme (192 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=CCONJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=ADP\|PartType=Inf`, `Degree=Pos\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=PART\|PartType=Inf`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Imp\|POS=VERB`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=ADV\|PartType=Inf`, `Degree=Sup\|POS=ADV`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|POS=PROPN`, `POS=ADP`, 
`Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=INTJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=SYM`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Degree=Sup\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind\|Style=Arch`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Foreign=Yes\|POS=X`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Degree=Abs\|POS=ADV`, `POS=VERB\|VerbForm=Ger`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=PRON\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, 
`Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|POS=AUX`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=NOUN`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=NOUN` |
| **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `expl`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:loc`, `obl:tmod`, `punct`, `xcomp` |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `POS_ACC` | 95.83 |
| `MORPH_ACC` | 95.70 |
| `DEP_UAS` | 84.92 |
| `DEP_LAS` | 81.76 |
| `SENTS_P` | 86.04 |
| `SENTS_R` | 87.41 |
| `SENTS_F` | 86.72 |
| `LEMMA_ACC` | 84.91 |
| `ENTS_F` | 82.32 |
| `ENTS_P` | 81.72 |
| `ENTS_R` | 82.92 |
| `TRANSFORMER_LOSS` | 41746686.63 |
| `MORPHOLOGIZER_LOSS` | 3458966.49 |
| `PARSER_LOSS` | 15104898.38 |
| `NER_LOSS` | 546098.45 |
## Bias and Robustness
Besides the validation done by spaCy on the DaNE test set, DaCy also provides a series of augmentations of the DaNE test set to see how well the models deal with these types of augmentations.
These can be seen as behavioural probes akin to the NLP checklist.
### Deterministic Augmentations
Deterministic augmentations are augmentations which always yield the same result.
| Augmentation | Part-of-speech tagging (Accuracy) | Morphological tagging (Accuracy) | Dependency Parsing (UAS) | Dependency Parsing (LAS) | Sentence segmentation (F1) | Lemmatization (Accuracy) | Named entity recognition (F1) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| No augmentation | 0.98 | 0.974 | 0.868 | 0.836 | 0.936 | 0.844 | 0.765 |
| Æøå Augmentation | 0.955 | 0.948 | 0.823 | 0.783 | 0.922 | 0.754 | 0.718 |
| Lowercase | 0.974 | 0.97 | 0.862 | 0.828 | 0.905 | 0.848 | 0.681 |
| No Spacing | 0.229 | 0.229 | 0.004 | 0.003 | 0.824 | 0.225 | 0.048 |
| Abbreviated first names | 0.979 | 0.973 | 0.864 | 0.832 | 0.94 | 0.845 | 0.699 |
| Input size augmentation 5 sentences | 0.956 | 0.956 | 0.851 | 0.818 | 0.883 | 0.844 | 0.743 |
| Input size augmentation 10 sentences | 0.959 | 0.958 | 0.853 | 0.821 | 0.897 | 0.844 | 0.755 |
### Stochastic Augmentations
Stochastic augmentations are augmentations which are repeated multiple times to estimate the effect of the augmentation.
| Augmentation | Part-of-speech tagging (Accuracy) | Morphological tagging (Accuracy) | Dependency Parsing (UAS) | Dependency Parsing (LAS) | Sentence segmentation (F1) | Lemmatization (Accuracy) | Named entity recognition (F1) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Keystroke errors 2% | 0.931 (0.003) | 0.929 (0.003) | 0.797 (0.003) | 0.753 (0.003) | 0.884 (0.003) | 0.772 (0.003) | 0.657 (0.003) |
| Keystroke errors 5% | 0.859 (0.003) | 0.863 (0.003) | 0.699 (0.003) | 0.641 (0.003) | 0.824 (0.003) | 0.681 (0.003) | 0.53 (0.003) |
| Keystroke errors 15% | 0.633 (0.006) | 0.662 (0.006) | 0.439 (0.006) | 0.358 (0.006) | 0.688 (0.006) | 0.459 (0.006) | 0.293 (0.006) |
| Danish names | 0.979 (0.0) | 0.974 (0.0) | 0.867 (0.0) | 0.835 (0.0) | 0.943 (0.0) | 0.847 (0.0) | 0.748 (0.0) |
| Muslim names | 0.979 (0.0) | 0.974 (0.0) | 0.865 (0.0) | 0.833 (0.0) | 0.94 (0.0) | 0.847 (0.0) | 0.732 (0.0) |
| Female names | 0.979 (0.0) | 0.974 (0.0) | 0.867 (0.0) | 0.835 (0.0) | 0.946 (0.0) | 0.847 (0.0) | 0.754 (0.0) |
| Male names | 0.979 (0.0) | 0.974 (0.0) | 0.867 (0.0) | 0.835 (0.0) | 0.943 (0.0) | 0.847 (0.0) | 0.748 (0.0) |
| Spacing Augmentation 5% | 0.941 (0.002) | 0.936 (0.002) | 0.755 (0.002) | 0.725 (0.002) | 0.907 (0.002) | 0.811 (0.002) | 0.699 (0.002) |
<details>
<summary> Description of Augmenters </summary>
**No augmentation:**
Applies no augmentation to the DaNE test set.
**Æøå Augmentation:**
This augmentation replaces æ, ø, and å with their spelling variants ae, oe, and aa respectively.
**Lowercase:**
This augmentation lowercases all text.
**No Spacing:**
This augmentation removes all spacing from the text.
**Abbreviated first names:**
This augmentation abbreviates the first names of entities. For instance, 'Kenneth Enevoldsen' becomes 'K. Enevoldsen'.
**Keystroke errors 2%:**
This augmentation simulates keystroke errors by replacing 2% of keys with a neighbouring key on a Danish QWERTY keyboard. As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
**Keystroke errors 5%:**
This augmentation simulates keystroke errors by replacing 5% of keys with a neighbouring key on a Danish QWERTY keyboard. As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
**Keystroke errors 15%:**
This augmentation simulates keystroke errors by replacing 15% of keys with a neighbouring key on a Danish QWERTY keyboard. As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
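As an illustration, a keystroke augmenter of this kind can be sketched in a few lines of Python; the neighbour map below is a small illustrative subset, not the full Danish QWERTY layout used for the results above.
```python
import random

# Illustrative neighbour map; the reported results use the full Danish QWERTY layout.
NEIGHBOURS = {
    "a": "qsz", "s": "awedxz", "d": "serfcx", "e": "wsdr",
    "n": "bhjm", "r": "edft", "o": "iklp", "t": "rfgy",
}

def keystroke_augment(text, error_rate=0.02, seed=None):
    """Replace each known character with a keyboard neighbour with probability `error_rate`."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.lower() in NEIGHBOURS and rng.random() < error_rate:
            out.append(rng.choice(NEIGHBOURS[ch.lower()]))
        else:
            out.append(ch)
    return "".join(out)

print(keystroke_augment("Kenneth Enevoldsen bor i Danmark.", error_rate=0.15, seed=0))
```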
**Danish names:**
This augmentation replaces all names with Danish names derived from Danmarks Statistik (2021). As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
**Muslim names:**
This augmentation replaces all names with Muslim names derived from Meldgaard (2005). As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
**Female names:**
This augmentation replaces all names with Danish female names derived from Danmarks Statistik (2021). As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
**Male names:**
This augmentation replaces all names with Danish male names derived from Danmarks Statistik (2021). As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
**Spacing Augmentation 5%:**
This augmentation randomly removes the spacing between words with a probability of 5%. As this augmentation is stochastic, it is repeated 20 times to obtain a consistent estimate, and the mean is reported with its standard deviation in parentheses.
</details>
<br />
### Hardware
The model was trained on a Quadro RTX 8000 GPU.
|
a66ec3975000a07a5e30996ca59f9616
|
jonatasgrosman/exp_w2v2t_et_vp-fr_s600
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'et']
| false | true | true | 469 | false |
# exp_w2v2t_et_vp-fr_s600
Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
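A minimal transcription sketch with the HuggingSound interface mentioned above (the audio file path is a placeholder; input should be 16kHz audio):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_vp-fr_s600")
audio_paths = ["/path/to/recording.wav"]  # placeholder path; 16kHz input expected

transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```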
|
5ce4bf973cb683d8dce014a1e6db7fd4
|
ajtamayoh/NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
|
ajtamayoh
|
roberta
| 13 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,011 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-CIC-WFU_Clinical_Cases_NER_Sents_Tokenized_bertin_roberta_base_spanish_fine_tuned
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0973
- Precision: 0.9012
- Recall: 0.6942
- F1: 0.7842
- Accuracy: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0605 | 1.0 | 2568 | 0.0625 | 0.9400 | 0.6322 | 0.7560 | 0.9836 |
| 0.0475 | 2.0 | 5136 | 0.0622 | 0.9533 | 0.6572 | 0.7781 | 0.9849 |
| 0.0374 | 3.0 | 7704 | 0.0552 | 0.9261 | 0.6784 | 0.7831 | 0.9855 |
| 0.0246 | 4.0 | 10272 | 0.0693 | 0.9381 | 0.6658 | 0.7788 | 0.9849 |
| 0.0126 | 5.0 | 12840 | 0.0974 | 0.8918 | 0.6830 | 0.7735 | 0.9849 |
| 0.0061 | 6.0 | 15408 | 0.0886 | 0.8771 | 0.7099 | 0.7847 | 0.9850 |
| 0.0031 | 7.0 | 17976 | 0.0973 | 0.9012 | 0.6942 | 0.7842 | 0.9857 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
133e88cdac7d32ec363aff85e9c156ab
|
Helsinki-NLP/opus-mt-pl-no
|
Helsinki-NLP
|
marian
| 11 | 8 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['pl', 'no']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,003 | false |
### pol-nor
* source group: Polish
* target group: Norwegian
* OPUS readme: [pol-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md)
* model: transformer-align
* source language(s): pol
* target language(s): nob
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.pol.nor | 27.5 | 0.479 |
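A short usage sketch with the standard MarianMT interface in Transformers (the Polish input sentence is only an example):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-no"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Polish -> Norwegian (Bokmål)
batch = tokenizer(["Dzień dobry, jak się masz?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```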
### System Info:
- hf_name: pol-nor
- source_languages: pol
- target_languages: nor
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/pol-nor/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['pl', 'no']
- src_constituents: {'pol'}
- tgt_constituents: {'nob', 'nno'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/pol-nor/opus-2020-06-17.test.txt
- src_alpha3: pol
- tgt_alpha3: nor
- short_pair: pl-no
- chrF2_score: 0.479
- bleu: 27.5
- brevity_penalty: 0.969
- ref_len: 2045.0
- src_name: Polish
- tgt_name: Norwegian
- train_date: 2020-06-17
- src_alpha2: pl
- tgt_alpha2: no
- prefer_old: False
- long_pair: pol-nor
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
1f71772618a750bb7bde4cb5a4720303
|
sahilrajpal121/train5a1e8w7-label-classification
|
sahilrajpal121
| null | 4 | 0 |
sklearn
| 0 |
tabular-classification
| false | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tabular-classification', 'baseline-trainer']
| false | true | true | 10,284 | false |
## Baseline Model trained on train5a1e8w7 to apply classification on label
**Metrics of the best model:**
| Metric | Value |
| --- | --- |
| accuracy | 0.693101 |
| recall_macro | 0.665973 |
| precision_macro | 0.657625 |
| f1_macro | 0.656998 |

Best model: `LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000)`
**See model plot below:**
`Pipeline(steps=[('easypreprocessor', EasyPreprocessor(types=<detected column types, 40 rows x 7 columns>)), ('logisticregression', LogisticRegression(C=0.1, class_weight='balanced', max_iter=1000))])`
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt
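For reference, a dabl baseline of this kind is typically produced along the lines of the sketch below (the CSV file name is an assumption; the target column is `label` as stated above):
```python
import pandas as pd
import dabl

# Fit dabl's quick baseline search over simple models on the tabular data.
df = pd.read_csv("train5a1e8w7.csv")  # assumed file name
clf = dabl.SimpleClassifier().fit(df, target_col="label")
```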
|
4f3b66c56df4bd3be4da9ff9b5120904
|
danielbubiola/daniel_asr
|
danielbubiola
|
wav2vec2
| 12 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,621 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel_asr
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4565
- Wer: 0.3423
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
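Expressed as Hugging Face `TrainingArguments`, these settings correspond roughly to the sketch below (the output directory and the surrounding `Trainer` setup are assumptions, not part of this card):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="daniel_asr",          # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,                        # "Native AMP" mixed-precision training
)
```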
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4909 | 4.0 | 500 | 1.3485 | 0.8887 |
| 0.5887 | 8.0 | 1000 | 0.4957 | 0.4641 |
| 0.2207 | 12.0 | 1500 | 0.4621 | 0.3971 |
| 0.125 | 16.0 | 2000 | 0.4339 | 0.3756 |
| 0.0829 | 20.0 | 2500 | 0.4618 | 0.3613 |
| 0.0601 | 24.0 | 3000 | 0.4564 | 0.3535 |
| 0.0456 | 28.0 | 3500 | 0.4565 | 0.3423 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
d794a44040cab6bffab68233d048e96b
|
SiddharthaM/distilbert-hate-final
|
SiddharthaM
|
distilbert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,837 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-hate-final
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6212
- Accuracy: 0.7253
- Precision: 0.7207
- Recall: 0.7253
- F1: 0.7206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.5760 | 0.7025 | 0.7053 | 0.7025 | 0.6771 |
| 0.569 | 2.0 | 592 | 0.5629 | 0.7215 | 0.7168 | 0.7215 | 0.7122 |
| 0.569 | 3.0 | 888 | 0.5616 | 0.7310 | 0.7274 | 0.7310 | 0.7215 |
| 0.4683 | 4.0 | 1184 | 0.5651 | 0.7338 | 0.7295 | 0.7338 | 0.7274 |
| 0.4683 | 5.0 | 1480 | 0.5898 | 0.7338 | 0.7305 | 0.7338 | 0.7246 |
| 0.4086 | 6.0 | 1776 | 0.6212 | 0.7253 | 0.7207 | 0.7253 | 0.7206 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
4061afdf7dabb1988a31bc391036db87
|
51la5/bert-large-NER
|
51la5
|
bert
| 9 | 29 |
transformers
| 0 |
token-classification
| true | true | true |
mit
|
['en']
|
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,743 | false |
# bert-large-NER
## Model description
**bert-large-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).
Specifically, this model is a *bert-large-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
If you'd like to use a smaller BERT model fine-tuned on the same dataset, a [**bert-base-NER**](https://huggingface.co/dslim/bert-base-NER/) version is also available.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-large-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-large-NER")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
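One common way to handle the subword issue is to let the pipeline aggregate token pieces into whole entity spans, as sketched below; `aggregation_strategy` exists in recent Transformers releases (older versions used `grouped_entities=True`):
```python
from transformers import pipeline

# Merge B-/I- subword predictions into complete entity spans.
nlp = pipeline("ner", model="dslim/bert-large-NER", aggregation_strategy="simple")
print(nlp("My name is Wolfgang and I live in Berlin"))
```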
## Training data
This model was fine-tuned on English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-MIS |Beginning of a miscellaneous entity right after another miscellaneous entity
I-MIS | Miscellaneous entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organization right after another organization
I-ORG |organization
B-LOC |Beginning of a location right after another location
I-LOC |Location
### CoNLL-2003 English Dataset Statistics
This dataset was derived from the Reuters corpus which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.
#### # of training examples per entity type
Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617
#### # of articles/sentences/tokens per dataset
Dataset |Articles |Sentences |Tokens
-|-|-|-
Train |946 |14,987 |203,621
Dev |216 |3,466 |51,362
Test |231 |3,684 |46,435
## Training procedure
This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained and evaluated the model on the CoNLL-2003 NER task.
## Eval results
metric|dev|test
-|-|-
f1 |95.7 |91.7
precision |95.3 |91.2
recall |96.1 |92.3
The test metrics are a little lower than the official Google BERT results which encoded document context & experimented with CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).
### BibTeX entry and citation info
```
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and
De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
```
|
72cebc925c956dec2540d87116083172
|
monologg/koelectra-base-v3-discriminator
|
monologg
|
electra
| 6 | 13,363 |
transformers
| 14 | null | true | false | false |
apache-2.0
|
['ko']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['korean']
| false | true | true | 1,693 | false |
# KoELECTRA v3 (Base Discriminator)
Pretrained ELECTRA Language Model for Korean (`koelectra-base-v3-discriminator`)
For more detail, please see [original repository](https://github.com/monologg/KoELECTRA/blob/master/README_EN.md).
## Usage
### Load model and tokenizer
```python
>>> from transformers import ElectraModel, ElectraTokenizer
>>> model = ElectraModel.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
```
### Tokenizer example
```python
>>> from transformers import ElectraTokenizer
>>> tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
>>> tokenizer.tokenize("[CLS] 한국어 ELECTRA를 공유합니다. [SEP]")
['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]']
>>> tokenizer.convert_tokens_to_ids(['[CLS]', '한국어', 'EL', '##EC', '##TRA', '##를', '공유', '##합니다', '.', '[SEP]'])
[2, 11229, 29173, 13352, 25541, 4110, 7824, 17788, 18, 3]
```
## Example using ElectraForPreTraining
```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizer
discriminator = ElectraForPreTraining.from_pretrained("monologg/koelectra-base-v3-discriminator")
tokenizer = ElectraTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
sentence = "나는 방금 밥을 먹었다."
fake_sentence = "나는 내일 밥을 먹었다."
fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
print(list(zip(fake_tokens, predictions.squeeze().tolist()[1:-1])))  # drop [CLS]/[SEP] positions to align with fake_tokens
```
|
40a24a34d5e2ed1747d569246f1fae7c
|
CouchCat/ma_mlc_v7_distil
|
CouchCat
|
distilbert
| 7 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multi-label']
| false | true | true | 592 | false |
### Description
A multi-label text classification model trained on customer feedback data using DistilBERT.
Possible labels are:
- Delivery (delivery status, time of arrival, etc.)
- Return (return confirmation, return label requests, etc.)
- Product (quality, complaint, etc.)
- Monetary (pending transactions, refund, etc.)
### Usage
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("CouchCat/ma_mlc_v7_distil")
model = AutoModelForSequenceClassification.from_pretrained("CouchCat/ma_mlc_v7_distil")
```
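A sketch of multi-label inference on top of the tokenizer and model loaded above (the 0.5 threshold and the use of `id2label` for label names are assumptions):
```python
import torch

text = "My package never arrived and I would like a refund."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label: independent sigmoid per label, then threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```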
|
9564b5975dc6ce3c00a72a3bf19d69ff
|
Gnanesh5/SAF
|
Gnanesh5
|
xlnet
| 6 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 900 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SAF
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
952250061f35c5afcf42d0c32a068ef6
|