repo_id | author | model_type | files_per_repo | downloads_30d | library | likes | pipeline | pytorch | tensorflow | jax | license | languages | datasets | co2 | prs_count | prs_open | prs_merged | prs_closed | discussions_count | discussions_open | discussions_closed | tags | has_model_index | has_metadata | has_text | text_length | is_nc | readme | hash |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
jonatasgrosman/exp_w2v2t_et_vp-nl_s353
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['et']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'et']
| false | true | true | 469 | false |
# exp_w2v2t_et_vp-nl_s353
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (et)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
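As a hedged example (not part of the original card), transcription with [HuggingSound](https://github.com/jonatasgrosman/huggingsound) might look like the sketch below; the audio paths are placeholders:
```python
from huggingsound import SpeechRecognitionModel

# a minimal sketch: transcribe 16kHz audio files with HuggingSound (paths are placeholders)
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_et_vp-nl_s353")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```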
|
0518ff5feed68accbdb7ccaee0c6b303
|
gokuls/distilbert_sa_GLUE_Experiment_rte_96
|
gokuls
|
distilbert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,173 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_rte_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6925
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6932 | 1.0 | 10 | 0.6928 | 0.5271 |
| 0.6934 | 2.0 | 20 | 0.6927 | 0.5271 |
| 0.6934 | 3.0 | 30 | 0.6932 | 0.4729 |
| 0.6931 | 4.0 | 40 | 0.6930 | 0.5271 |
| 0.6936 | 5.0 | 50 | 0.6932 | 0.4440 |
| 0.6932 | 6.0 | 60 | 0.6927 | 0.5271 |
| 0.6932 | 7.0 | 70 | 0.6926 | 0.5271 |
| 0.6928 | 8.0 | 80 | 0.6932 | 0.4477 |
| 0.6935 | 9.0 | 90 | 0.6932 | 0.4260 |
| 0.6933 | 10.0 | 100 | 0.6925 | 0.5271 |
| 0.6929 | 11.0 | 110 | 0.6932 | 0.4440 |
| 0.693 | 12.0 | 120 | 0.6935 | 0.4729 |
| 0.6926 | 13.0 | 130 | 0.6931 | 0.5307 |
| 0.6916 | 14.0 | 140 | 0.6932 | 0.5199 |
| 0.6903 | 15.0 | 150 | 0.6943 | 0.4440 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
5523ef341e57bf40aafef8306964c645
|
Lvxue/distilled-mt5-small-1-1
|
Lvxue
|
mt5
| 14 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en', 'ro']
|
['wmt16']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,034 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-1-1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8289
- Bleu: 6.6959
- Gen Len: 45.7539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
da0ec09716f568c416953b30498c529c
|
flamesbob/Yuko_model
|
flamesbob
| null | 4 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 955 | false |
To draw emphasis from the trained embedding, include the word `m_yukoring` in your prompt.
Yukoring is an artist who does a lot of anime watercolor style art.
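As a rough illustration, the embedding can be loaded into a Stable Diffusion pipeline with `diffusers`; the choice of base model and the assumption that this repository hosts a standard textual-inversion embedding file are mine, not the author's:
```python
import torch
from diffusers import StableDiffusionPipeline

# load a base Stable Diffusion checkpoint (base model choice is an assumption)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# load the embedding, then trigger it by using `m_yukoring` in the prompt
pipe.load_textual_inversion("flamesbob/Yuko_model")
image = pipe("a portrait of a girl, m_yukoring, watercolor").images[0]
image.save("yukoring_style.png")
```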
License: This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully).
Please read the full license here.
|
96d4e1e1910aeb40ab1df3ecff18724b
|
zendiode69/electra-base-squad2-finetuned-squad-12-trainedfor-3
|
zendiode69
|
electra
| 12 | 0 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,298 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-squad2-finetuned-squad-12-trainedfor-3
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3064
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6128 | 1.0 | 578 | 0.3142 |
| 0.4583 | 2.0 | 1156 | 0.3072 |
| 0.415 | 3.0 | 1734 | 0.3064 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
9a70bfbf6fe35b88a7e8f27c4bc33795
|
Martha-987/whisper-small-Arabic
|
Martha-987
|
whisper
| 24 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ar']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,296 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Ar- Martha
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3837
- Wer: 51.1854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2726 | 0.42 | 1000 | 0.3837 | 51.1854 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
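As a hedged sketch (not from the original card), Arabic speech could be transcribed with the `transformers` ASR pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

# a minimal sketch: transcribe a 16 kHz Arabic audio file (path is a placeholder)
asr = pipeline("automatic-speech-recognition", model="Martha-987/whisper-small-Arabic")
print(asr("arabic_sample.wav"))
```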
|
c3e578f6ad21da408cbae02f45d61a7d
|
asapp/sew-d-tiny-100k
|
asapp
|
sew-d
| 5 | 126 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
|
['en']
|
['librispeech_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['speech']
| false | true | true | 1,699 | false |
# SEW-D-tiny
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model was pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found at https://github.com/asappresearch/sew#model-checkpoints.
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
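As a hedged sketch of feature extraction with this checkpoint (the dummy waveform below stands in for real 16 kHz speech):
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, SEWDModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-tiny-100k")
model = SEWDModel.from_pretrained("asapp/sew-d-tiny-100k")

# one second of silence as a stand-in for real speech sampled at 16kHz
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```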
|
71cb2d3f37d9bbb03327f419eeb83e88
|
KoichiYasuoka/roberta-base-korean-hanja
|
KoichiYasuoka
|
roberta
| 7 | 73 |
transformers
| 1 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ko']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['korean', 'masked-lm']
| false | true | true | 775 | false |
# roberta-base-korean-hanja
## Model Description
This is a RoBERTa model pre-trained on Korean texts, derived from [klue/roberta-base](https://huggingface.co/klue/roberta-base). Token embeddings are enhanced to include all basic hanja for educational use (한문 교육용 기초 한자) and hanja for personal names (인명용 한자). You can fine-tune `roberta-base-korean-hanja` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-korean-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-korean-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-korean-hanja")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-korean-hanja")
```
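A possible next step, not shown in the original card, is masked-token prediction with the `fill-mask` pipeline; the sample sentence is only an illustration:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-korean-hanja")
# build a sample sentence around the tokenizer's mask token (the sentence is an arbitrary example)
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"大韓民國의 首都는 {mask}이다."))
```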
|
9444e26a24aa70e106eba1ad42e105d0
|
sukhendrasingh/finetuning-sentiment-model-3000-samples
|
sukhendrasingh
|
distilbert
| 13 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,056 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3323
- Accuracy: 0.8733
- F1: 0.8797
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
212f80c1139b3a3b41ef73daafc2f5df
|
EleutherAI/pythia-1b
|
EleutherAI
|
gpt_neox
| 7 | 7,595 |
transformers
| 3 |
text-generation
| true | false | false |
apache-2.0
|
['en']
|
['the_pile']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['pytorch', 'causal-lm', 'pythia']
| false | true | true | 10,783 | false |
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research. It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. All Pythia models are available
[on Hugging Face](https://huggingface.co/EleutherAI).
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models match or exceed the performance of similar and same-sized models,
such as those in the OPT and GPT-Neo suites.
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact model parameter counts.
## Pythia-1B
### Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
### Uses and Limitations
#### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. To enable the
study of how language models change over the course of training, we provide
143 evenly spaced intermediate checkpoints per model. These checkpoints are
hosted on Hugging Face as branches. Note that branch `143000` corresponds
exactly to the model checkpoint on the `main` branch of each model.
You may also further fine-tune and adapt Pythia-1B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-1B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
#### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-1B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-1B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “understand” human instructions.
#### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token deemed statistically most likely by the
model need not produce the most “accurate” text. Never rely on
Pythia-1B to produce factually accurate output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-1B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-1B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
"EleutherAI/pythia-70m-deduped",
revision="step3000",
cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
### Training
#### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
The Pile was **not** deduplicated before being used to train Pythia-1B.
#### Training procedure
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for the equivalent of 143000 steps at a batch size
of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch
size of 4M tokens listed were originally trained for 71500 steps instead, with
checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for
consistency with all 2M batch models, so `step1000` is the first checkpoint
for `pythia-1.4b` that was saved (corresponding to step 500 in training), and
`step1000` is likewise the first `pythia-6.9b` checkpoint that was saved
(corresponding to 1000 “actual” steps).
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).
### Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json).
February 2023 note: select evaluations and comparison with OPT and BLOOM
models will be added here at a later date.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
|
b7fa2eb99d59ee6a3ecec8cb473eedda
|
sd-concepts-library/scratch-project
|
sd-concepts-library
| null | 16 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,906 | false |
### Scratch project on Stable Diffusion
This is the `<scratch-project>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:











|
9025fb3f45049e252c9eb387ce2468cd
|
Haakf/allsides_left_text_padded_overfit
|
Haakf
|
distilbert
| 8 | 4 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 2,467 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Haakf/allsides_left_text_padded_overfit
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9591
- Validation Loss: 1.9856
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -712, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0625 | 1.8988 | 0 |
| 2.0063 | 1.9757 | 1 |
| 2.0061 | 1.9345 | 2 |
| 1.9730 | 1.9248 | 3 |
| 1.9572 | 1.8433 | 4 |
| 1.9645 | 1.9104 | 5 |
| 1.9584 | 1.9017 | 6 |
| 1.9508 | 1.9430 | 7 |
| 1.9716 | 1.9498 | 8 |
| 1.9613 | 1.9312 | 9 |
| 1.9625 | 1.8820 | 10 |
| 1.9573 | 1.8768 | 11 |
| 1.9612 | 1.8837 | 12 |
| 1.9501 | 1.9325 | 13 |
| 1.9471 | 1.9231 | 14 |
| 1.9567 | 1.8987 | 15 |
| 1.9605 | 1.9159 | 16 |
| 1.9661 | 1.9157 | 17 |
| 1.9513 | 1.8840 | 18 |
| 1.9591 | 1.9856 | 19 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
c5dc687e358f92e689cc89c35172278c
|
research-backup/bart-base-squadshifts-vanilla-reddit-qg
|
research-backup
|
bart
| 15 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_squadshifts']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 4,148 | false |
# Model Card of `research-backup/bart-base-squadshifts-vanilla-reddit-qg`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: reddit) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (reddit)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-base-squadshifts-vanilla-reddit-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-base-squadshifts-vanilla-reddit-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squadshifts-vanilla-reddit-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.reddit.json)
| | Score | Type | Dataset |
|:-----------|--------:|:-------|:---------------------------------------------------------------------------|
| BERTScore | 91.89 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 25.35 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 16.53 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 10.97 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 7.52 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 21.32 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 61.44 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 24.67 | reddit | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: reddit
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 6
- batch: 8
- lr: 5e-05
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squadshifts-vanilla-reddit-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
cf2ff92a65d8a73fd4c4cf79025e7034
|
Helsinki-NLP/opus-mt-pl-sv
|
Helsinki-NLP
|
marian
| 10 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-pl-sv
* source languages: pl
* target languages: sv
* OPUS readme: [pl-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-sv/opus-2020-01-24.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.sv | 58.9 | 0.717 |
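A minimal usage sketch with `transformers` (the Polish example sentence is arbitrary):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# translate Polish to Swedish
batch = tokenizer(["Dzień dobry, jak się masz?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```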
|
ea05d5b8cdb8ae18c9fd6b1fe33dd38e
|
JTH/results
|
JTH
|
distilbert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 921 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fbaa51094a310b8bf0ee81d405720cc4
|
natedog/my_awesome_billsum_model
|
natedog
|
t5
| 14 | 0 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['billsum']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,203 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 3.5089 | 0.1247 | 0.0333 | 0.1056 | 0.1055 | 19.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.2
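As a hedged usage sketch (not part of the original card), the model can be called through the `transformers` summarization pipeline; the input text is a placeholder:
```python
from transformers import pipeline

# a minimal sketch: summarize a passage of legislative text (input is a placeholder)
summarizer = pipeline("summarization", model="natedog/my_awesome_billsum_model")
text = "The bill establishes a grant program to support broadband deployment in rural areas ..."
print(summarizer(text, max_length=60, min_length=10))
```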
|
e20d75e29c267137e8b973c401b3629c
|
openmmlab/upernet-swin-base
|
openmmlab
|
upernet
| 5 | 29 |
transformers
| 0 |
image-segmentation
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'image-segmentation']
| false | true | true | 1,520 | false |
# UperNet, Swin Transformer base-sized backbone
UperNet framework for semantic segmentation, leveraging a Swin Transformer backbone. UperNet was introduced in the paper [Unified Perceptual Parsing for Scene Understanding](https://arxiv.org/abs/1807.10221) by Xiao et al.
Combining UperNet with a Swin Transformer backbone was introduced in the paper [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030).
Disclaimer: The team releasing UperNet + Swin Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
UperNet is a framework for semantic segmentation. It consists of several components, including a backbone, a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM).
Any visual backbone can be plugged into the UperNet framework. The framework predicts a semantic label per pixel.

## Intended uses & limitations
You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=openmmlab/upernet) to look for
fine-tuned versions (with various backbones) on a task that interests you.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/upernet#transformers.UperNetForSemanticSegmentation).
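For orientation only (not from the original card), a minimal sketch with `transformers`; the COCO image URL is an arbitrary example:
```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, UperNetForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-swin-base")
model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-swin-base")

# any RGB image works; this COCO image is just an example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, height, width)
segmentation = logits.argmax(dim=1)[0]  # per-pixel class indices
```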
|
493ffea0e4665b131757555da3d51ff8
|
sd-concepts-library/im-poppy
|
sd-concepts-library
| null | 21 | 0 | null | 3 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,236 | false |
### im-poppy on Stable Diffusion
This is the `im-poppy` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:
















|
3766baf7adcbf928086c3536e59c92f5
|
kasrahabib/200-500-bucket-finetunned
|
kasrahabib
|
bert
| 10 | 5 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,724 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/200-500-bucket-finetunned
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0280
- Validation Loss: 0.3784
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3110, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.0739 | 0.6559 | 0 |
| 0.4665 | 0.4309 | 1 |
| 0.2473 | 0.3669 | 2 |
| 0.1437 | 0.3746 | 3 |
| 0.0825 | 0.3663 | 4 |
| 0.0592 | 0.3649 | 5 |
| 0.0451 | 0.3523 | 6 |
| 0.0345 | 0.3710 | 7 |
| 0.0292 | 0.3705 | 8 |
| 0.0280 | 0.3784 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
95a84dfa0e0943f27c2ec257247e0cf8
|
WALIDALI/cynthiasly
|
WALIDALI
| null | 18 | 2 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 420 | false |
### cynthiasly Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
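A hedged sketch of loading the Dreambooth model with `diffusers` (the prompt is a placeholder; check the repository for the exact instance token the model was trained with):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "WALIDALI/cynthiasly", torch_dtype=torch.float16
).to("cuda")

# the prompt below is a placeholder; use the instance token the model was trained with
image = pipe("a photo of cynthiasly, portrait, highly detailed").images[0]
image.save("cynthiasly.png")
```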
Sample pictures of this concept:
|
4ec1640cbe26bc35afa3293468688c5d
|
frahman/distilbert-base-uncased-distilled-clinc
|
frahman
|
distilbert
| 10 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['clinc_oos']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,793 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1002
- Accuracy: 0.9406
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9039 | 1.0 | 318 | 0.5777 | 0.7335 |
| 0.4486 | 2.0 | 636 | 0.2860 | 0.8768 |
| 0.2528 | 3.0 | 954 | 0.1792 | 0.9210 |
| 0.176 | 4.0 | 1272 | 0.1398 | 0.9274 |
| 0.1417 | 5.0 | 1590 | 0.1209 | 0.9329 |
| 0.1245 | 6.0 | 1908 | 0.1110 | 0.94 |
| 0.1135 | 7.0 | 2226 | 0.1061 | 0.9390 |
| 0.1074 | 8.0 | 2544 | 0.1026 | 0.94 |
| 0.1032 | 9.0 | 2862 | 0.1006 | 0.9410 |
| 0.1017 | 10.0 | 3180 | 0.1002 | 0.9406 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
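As a hedged sketch (not from the original card), intent classification with the `transformers` pipeline; the example utterance is arbitrary:
```python
from transformers import pipeline

# a minimal sketch: classify the intent of a user utterance (example is arbitrary)
classifier = pipeline("text-classification", model="frahman/distilbert-base-uncased-distilled-clinc")
print(classifier("How do I reset the password for my online banking account?"))
```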
|
8dc285a1cbb7c9119222736c974e13dd
|
meongracun/nmt-mpst-id-en-lr_1e-05-ep_20-seq_128_bs-32
|
meongracun
|
t5
| 9 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,548 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-05-ep_20-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7787
- Bleu: 0.0338
- Meteor: 0.1312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 3.1965 | 0.0132 | 0.0696 |
| No log | 2.0 | 404 | 3.0644 | 0.0224 | 0.0975 |
| 3.5509 | 3.0 | 606 | 2.9995 | 0.0255 | 0.1075 |
| 3.5509 | 4.0 | 808 | 2.9538 | 0.0269 | 0.1106 |
| 3.2374 | 5.0 | 1010 | 2.9221 | 0.0277 | 0.1134 |
| 3.2374 | 6.0 | 1212 | 2.8996 | 0.0286 | 0.1165 |
| 3.2374 | 7.0 | 1414 | 2.8750 | 0.0291 | 0.1177 |
| 3.143 | 8.0 | 1616 | 2.8611 | 0.0297 | 0.1197 |
| 3.143 | 9.0 | 1818 | 2.8466 | 0.0303 | 0.1209 |
| 3.092 | 10.0 | 2020 | 2.8330 | 0.0312 | 0.1229 |
| 3.092 | 11.0 | 2222 | 2.8234 | 0.0318 | 0.1247 |
| 3.092 | 12.0 | 2424 | 2.8130 | 0.0322 | 0.1264 |
| 3.0511 | 13.0 | 2626 | 2.8058 | 0.0323 | 0.1269 |
| 3.0511 | 14.0 | 2828 | 2.7970 | 0.0324 | 0.1272 |
| 3.0288 | 15.0 | 3030 | 2.7914 | 0.033 | 0.1288 |
| 3.0288 | 16.0 | 3232 | 2.7877 | 0.0331 | 0.1299 |
| 3.0288 | 17.0 | 3434 | 2.7837 | 0.0333 | 0.1302 |
| 3.0133 | 18.0 | 3636 | 2.7809 | 0.0336 | 0.1308 |
| 3.0133 | 19.0 | 3838 | 2.7792 | 0.0337 | 0.131 |
| 3.0028 | 20.0 | 4040 | 2.7787 | 0.0338 | 0.1312 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
9ab48c64bbbd301c596a81dc18e59809
|
schorndorfer/distilroberta-base-finetuned-wikitext2
|
schorndorfer
|
roberta
| 9 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,267 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8347
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0853 | 1.0 | 2406 | 1.9214 |
| 1.986 | 2.0 | 4812 | 1.8799 |
| 1.9568 | 3.0 | 7218 | 1.8202 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
7643a16fe5243196a3125d2e7ebf8158
|
IDEA-CCNL/Randeng-T5-Char-57M-Chinese
|
IDEA-CCNL
|
mt5
| 8 | 15 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['T5', 'chinese', 'sentencepiece']
| false | true | true | 2,724 | false |
# Randeng-T5-Char-57M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLT任务,中文版的T5-small,采用了BertTokenizer和中文字级别词典。
Good at handling NLT (natural language transformation) tasks; this is the Chinese version of T5-small, using BertTokenizer and a Chinese character-level vocabulary.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | T5 | 57M | 中文-Chinese |
## 模型信息 Model Information
对比T5-small,训练了它的中文版。为了更好适用于中文任务,我们仅使用BertTokenizer,和支持中英文的词表,并且使用了语料库自适应预训练(Corpus-Adaptive Pre-Training, CAPT)技术在悟道语料库(180G版本)继续预训练。预训练目标为破坏span。具体地,我们在预训练阶段中使用了[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen)大概花费了8张A100约24小时。
Compared with T5-small, we implement its Chinese version. In order to use it for Chinese tasks, we use BertTokenizer and a Chinese vocabulary, and apply Corpus-Adaptive Pre-Training (CAPT) on the WuDao Corpora (180 GB version). The pretraining objective is span corruption. Specifically, we used the [fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) in the pre-training phase, which took about 24 hours on 8 A100 GPUs.
## 使用 Usage
```python
from transformers import T5ForConditionalGeneration, BertTokenizer
import torch
tokenizer=BertTokenizer.from_pretrained('IDEA-CCNL/Randeng-T5-Char-57M-Chinese', add_special_tokens=False)
model=T5ForConditionalGeneration.from_pretrained('IDEA-CCNL/Randeng-T5-Char-57M-Chinese')
```
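Continuing from the snippet above, a rough generation sketch; the input sentence and generation settings are arbitrary examples, not from the authors:
```python
# encode an arbitrary Chinese input and generate output text
inputs = tokenizer("北京是中国的首都。", return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```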
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
66f925954581b239a7d667e4a6372853
|
jo0hnd0e/mt5-small-finetuned-amazon-en-es
|
jo0hnd0e
|
mt5
| 8 | 1 |
transformers
| 0 |
text2text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,649 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jo0hnd0e/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.9844
- Validation Loss: 3.3610
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 9.6302 | 4.2399 | 0 |
| 5.7657 | 3.7191 | 1 |
| 4.9972 | 3.5931 | 2 |
| 4.6081 | 3.5038 | 3 |
| 4.3425 | 3.4322 | 4 |
| 4.1758 | 3.3950 | 5 |
| 4.0512 | 3.3649 | 6 |
| 3.9844 | 3.3610 | 7 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
07bedf8f59fc110d64dbabbd8228df8e
|
TalTechNLP/voxlingua107-epaca-tdnn
|
TalTechNLP
| null | 8 | 24,084 |
speechbrain
| 20 |
audio-classification
| true | false | false |
apache-2.0
|
['multilingual']
|
['VoxLingua107']
| null | 2 | 2 | 0 | 0 | 1 | 1 | 0 |
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
| false | true | true | 5,487 | false |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download Thai language sample from Omniglot and convert to suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Probably its accuracy on smaller languages is quite limited
- Probably it works worse on female speech than male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- Probably it doesn't work well on children's speech and on persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
|
97fcf0587724b1a6bdf6a728f1d109bc
|
polejowska/swin-tiny-patch4-window7-224-eurosat
|
polejowska
|
swin
| 18 | 13 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,606 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0447
- Accuracy: 0.9852
## Model description
More information needed
## Intended uses & limitations
More information needed
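A minimal inference sketch (assuming the checkpoint loads with the standard Transformers image-classification classes; the image path is a placeholder):
```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder path; replace with your own EuroSAT-style satellite image.
image = Image.open("example.jpg")

processor = AutoImageProcessor.from_pretrained("polejowska/swin-tiny-patch4-window7-224-eurosat")
model = AutoModelForImageClassification.from_pretrained("polejowska/swin-tiny-patch4-window7-224-eurosat")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```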
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1547 | 0.99 | 147 | 0.0956 | 0.9711 |
| 0.0707 | 1.99 | 294 | 0.0759 | 0.9733 |
| 0.0537 | 2.99 | 441 | 0.0680 | 0.9768 |
| 0.0302 | 3.99 | 588 | 0.0447 | 0.9852 |
| 0.0225 | 4.99 | 735 | 0.0489 | 0.9837 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ea9fed40410a5ed9153800c580640cfc
|
Anjoe/german-poetry-gpt2-large
|
Anjoe
|
gpt2
| 15 | 170 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,179 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german-poetry-gpt2-large
This model is a fine-tuned version of [benjamin/gerpt2-large](https://huggingface.co/benjamin/gerpt2-large) on German poems.
It achieves the following results on the evaluation set:
- eval_loss: 3.5753
- eval_runtime: 100.7173
- eval_samples_per_second: 51.6
- eval_steps_per_second: 25.805
- epoch: 4.0
- step: 95544
## Model description
large version of gpt-2
## Intended uses & limitations
It could be used for poetry generation
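A minimal generation sketch (assuming the checkpoint works with the standard text-generation pipeline; the German prompt is illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Anjoe/german-poetry-gpt2-large")

# Illustrative German prompt: "The moon rises over"
print(generator("Der Mond steigt über", max_length=60, do_sample=True, top_p=0.9)[0]["generated_text"])
```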
## Training and evaluation data
The model was trained on German poems from Projekt Gutenberg
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.0
- Tokenizers 0.12.1
|
fec6841219380c04f06fbc69fa7f3f28
|
mideind/IceBERT-xlmr-ic3
|
mideind
|
roberta
| 6 | 418 |
transformers
| 0 |
fill-mask
| true | false | false |
agpl-3.0
|
['is']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta', 'icelandic', 'masked-lm', 'pytorch']
| false | true | true | 1,604 | false |
# IceBERT-xlmr-ic3
This model was trained with fairseq using the RoBERTa-base architecture. The model `xlm-roberta-base` was used as a starting point. It is one of many models we have trained for Icelandic, see the paper referenced below for further details. The training data used is shown in the table below.
| Dataset | Size | Tokens |
|------------------------------------------------------|---------|--------|
| Icelandic Common Crawl Corpus (IC3) | 4.9 GB | 824M |
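A minimal fill-mask sketch (assuming the checkpoint loads with the standard Transformers fill-mask pipeline; the Icelandic sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="mideind/IceBERT-xlmr-ic3")

# "Reykjavík is the capital of <mask>."
for prediction in fill_mask("Reykjavík er höfuðborg <mask>."):
    print(prediction["token_str"], prediction["score"])
```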
## Citation
The model is described in this paper [https://arxiv.org/abs/2201.05601](https://arxiv.org/abs/2201.05601). Please cite the paper if you make use of the model.
```
@article{DBLP:journals/corr/abs-2201-05601,
author = {V{\'{e}}steinn Sn{\ae}bjarnarson and
Haukur Barri S{\'{\i}}monarson and
P{\'{e}}tur Orri Ragnarsson and
Svanhv{\'{\i}}t Lilja Ing{\'{o}}lfsd{\'{o}}ttir and
Haukur P{\'{a}}ll J{\'{o}}nsson and
Vilhj{\'{a}}lmur {\TH}orsteinsson and
Hafsteinn Einarsson},
title = {A Warm Start and a Clean Crawled Corpus - {A} Recipe for Good Language
Models},
journal = {CoRR},
volume = {abs/2201.05601},
year = {2022},
url = {https://arxiv.org/abs/2201.05601},
eprinttype = {arXiv},
eprint = {2201.05601},
timestamp = {Thu, 20 Jan 2022 14:21:35 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2201-05601.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
b10133e36a22903eabb2491180928fd1
|
KoichiYasuoka/bert-large-japanese-upos
|
KoichiYasuoka
|
bert
| 9 | 11 |
transformers
| 2 |
token-classification
| true | false | false |
cc-by-sa-4.0
|
['ja']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['japanese', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
| false | true | true | 1,120 | false |
# bert-large-japanese-upos
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-large-japanese-char-extended](https://huggingface.co/KoichiYasuoka/bert-large-japanese-char-extended). Every short-unit-word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
import torch
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-large-japanese-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-large-japanese-upos")
s="国境の長いトンネルを抜けると雪国であった。"
p=[model.config.id2label[q] for q in torch.argmax(model(tokenizer.encode(s,return_tensors="pt"))["logits"],dim=2)[0].tolist()[1:-1]]
print(list(zip(s,p)))
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-large-japanese-upos")
print(nlp("国境の長いトンネルを抜けると雪国であった。"))
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
1d7c5f0f065aaee61cb9351a6a1b3f01
|
42MARU/ko-ctc-kenlm-spelling-only-wiki
|
42MARU
| null | 10 | 0 |
kenlm
| 0 |
text2text-generation
| false | false | false |
apache-2.0
|
['ko']
|
['korean-wiki']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'text2text-generation']
| false | true | true | 2,218 | false |
# ko-ctc-kenlm-spelling-only-wiki
## Table of Contents
- [ko-ctc-kenlm-spelling-only-wiki](#ko-ctc-kenlm-spelling-only-wiki)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description** <br />
 - An n-gram language model for the acoustic model, built on grapheme-level word units and trained with KenLM. Use this model together with [ko-spelling-wav2vec2-conformer-del-1s](https://huggingface.co/42MARU/ko-spelling-wav2vec2-conformer-del-1s). <br />
 - Packaged so that it can be loaded and used in the HuggingFace Transformers style. <br />
 - It can also be used directly with the pyctcdecode lib. <br />
 - Korean Wikipedia was used as the training data. <br />
 All sentences containing words outside the spelling vocabulary were removed, minimizing the chance that the LM itself introduces outliers. <br />
 This model was trained on **spelling transcription** data (numbers and English follow their respective written forms). <br />
- **Developed by:** TADev (@lIlBrother)
- **Language(s):** Korean
- **License:** apache-2.0
## How to Get Started With the Model
```python
import unicodedata

import librosa
from pyctcdecode import build_ctcdecoder
from transformers import (
    AutoConfig,
    AutoFeatureExtractor,
    AutoModelForCTC,
    AutoTokenizer,
    Wav2Vec2ProcessorWithLM,
)
from transformers.pipelines import AutomaticSpeechRecognitionPipeline

audio_path = ""

# Load the model, tokenizer, and the modules needed for inference.
model = AutoModelForCTC.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
processor = Wav2Vec2ProcessorWithLM.from_pretrained("42MARU/ko-ctc-kenlm-spelling-only-wiki")

# Plug the loaded modules into the pipeline used for actual inference.
asr_pipeline = AutomaticSpeechRecognitionPipeline(
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    decoder=processor.decoder,
    device=-1,
)

# Load the audio file and run inference with explicit beam-search parameters.
raw_data, _ = librosa.load(audio_path, sr=16000)
kwargs = {"decoder_kwargs": {"beam_width": 100}}
pred = asr_pipeline(inputs=raw_data, **kwargs)["text"]

# The model outputs decomposed Unicode (jamo) text, so normalize it back to a regular string.
result = unicodedata.normalize("NFC", pred)
print(result)
# 안녕하세요 123 테스트입니다.
```
|
357153ea48981220b2834c23fd847186
|
rafiulrumy/wav2vec2-large-xlsr-53-demo-colab
|
rafiulrumy
|
wav2vec2
| 21 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,657 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 6.7860
- Wer: 1.1067
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.2273 | 44.42 | 400 | 3.3544 | 1.0 |
| 0.9228 | 88.84 | 800 | 4.7054 | 1.1601 |
| 0.1423 | 133.32 | 1200 | 5.9489 | 1.1578 |
| 0.0751 | 177.74 | 1600 | 5.5939 | 1.1717 |
| 0.0554 | 222.21 | 2000 | 6.1230 | 1.1717 |
| 0.0356 | 266.63 | 2400 | 6.2845 | 1.1613 |
| 0.0288 | 311.11 | 2800 | 6.6109 | 1.2100 |
| 0.0223 | 355.53 | 3200 | 6.5605 | 1.1299 |
| 0.0197 | 399.95 | 3600 | 7.1242 | 1.1682 |
| 0.0171 | 444.42 | 4000 | 7.2452 | 1.1578 |
| 0.0149 | 488.84 | 4400 | 7.4048 | 1.0684 |
| 0.0118 | 533.32 | 4800 | 6.6227 | 1.1172 |
| 0.011 | 577.74 | 5200 | 6.7909 | 1.1566 |
| 0.0095 | 622.21 | 5600 | 6.8088 | 1.1102 |
| 0.0077 | 666.63 | 6000 | 7.4451 | 1.1311 |
| 0.0062 | 711.11 | 6400 | 6.8486 | 1.0777 |
| 0.0051 | 755.53 | 6800 | 6.8812 | 1.1241 |
| 0.0051 | 799.95 | 7200 | 6.9987 | 1.1450 |
| 0.0041 | 844.42 | 7600 | 7.3048 | 1.1323 |
| 0.0044 | 888.84 | 8000 | 6.6644 | 1.1125 |
| 0.0031 | 933.32 | 8400 | 6.6298 | 1.1148 |
| 0.0027 | 977.74 | 8800 | 6.7860 | 1.1067 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
d025d802c993a50a1286bab7e281fbe8
|
vblagoje/dpr-question_encoder-single-lfqa-base
|
vblagoje
|
dpr
| 7 | 111 |
transformers
| 0 |
feature-extraction
| true | false | false |
mit
|
['en']
|
['vblagoje/lfqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,618 | false |
## Introduction
The question encoder model based on [DPRQuestionEncoder](https://huggingface.co/docs/transformers/master/en/model_doc/dpr#transformers.DPRQuestionEncoder) architecture. It uses the transformer's pooler outputs as question representations.
## Training
We trained vblagoje/dpr-question_encoder-single-lfqa-base using FAIR's dpr-scale, starting from a PAQ-based pretrained checkpoint, and fine-tuned the retriever on question-answer pairs from the LFQA dataset. As dpr-scale requires DPR-formatted training input with positive, negative, and hard-negative samples, we created a training file in which the question's answer is the positive sample, answers to unrelated questions are the negatives, and hard negatives are taken from answers to questions with a cosine similarity between 0.55 and 0.65.
## Performance
The LFQA DPR-based retriever (vblagoje/dpr-question_encoder-single-lfqa-base together with vblagoje/dpr-ctx_encoder-single-lfqa-base) scored 6.69 R-precision and 14.5 Recall@5 on the KILT benchmark.
## Usage
```python
import torch
from transformers import AutoTokenizer, DPRQuestionEncoder

device = "cuda" if torch.cuda.is_available() else "cpu"

model = DPRQuestionEncoder.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-base").to(device)
tokenizer = AutoTokenizer.from_pretrained("vblagoje/dpr-question_encoder-single-lfqa-base")

input_ids = tokenizer("Why do airplanes leave contrails in the sky?", return_tensors="pt")["input_ids"].to(device)
embeddings = model(input_ids).pooler_output
```
## Author
- Vladimir Blagojevic: `dovlex [at] gmail.com` [Twitter](https://twitter.com/vladblagoje) | [LinkedIn](https://www.linkedin.com/in/blagojevicvladimir/)
|
4ac79235cdc4bb8a1a6f2c38f6513404
|
drhyrum/bert-tiny-torch-vuln
|
drhyrum
|
bert
| 5 | 69 |
transformers
| 2 | null | true | false | false |
['mit']
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['BERT', 'MNLI', 'NLI', 'transformer', 'pre-training']
| false | true | true | 2,457 | false |
The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).
This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini) [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arxiv](https://arxiv.org/abs/1908.08962)), and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are supposed to be trained on a downstream task.
If you use the model, please consider citing both the papers:
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Config of this model:
- `prajjwal1/bert-tiny` (L=2, H=128) [Model Link](https://huggingface.co/prajjwal1/bert-tiny)
Other models to check out:
- `prajjwal1/bert-mini` (L=4, H=256) [Model Link](https://huggingface.co/prajjwal1/bert-mini)
- `prajjwal1/bert-small` (L=4, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-small)
- `prajjwal1/bert-medium` (L=8, H=512) [Model Link](https://huggingface.co/prajjwal1/bert-medium)
Original Implementation and more info can be found in [this Github repository](https://github.com/prajjwal1/generalize_lm_nli).
Twitter: [@prajjwal_1](https://twitter.com/prajjwal_1)
|
9d6ced45b14770e55665a3c235da784c
|
PaddlePaddle/ernie-1.0-large-zh-cw
|
PaddlePaddle
|
ernie
| 7 | 0 |
paddlenlp
| 0 |
fill-mask
| false | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['fill-mask']
| false | true | true | 1,688 | false |
[](https://github.com/PaddlePaddle/PaddleNLP)
# PaddlePaddle/ernie-1.0-large-zh-cw
## Introduction
We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration).
Inspired by the masking strategy of BERT, ERNIE is designed to learn language representation enhanced by knowledge masking strategies,
which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words.
Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit.
Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks
including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering.
We also demonstrate that ERNIE has more powerful knowledge inference capacity on a cloze test.
More detail: https://arxiv.org/abs/1904.09223
## Available Models
- ernie-1.0-base-zh
- ernie-1.0-large-zh-cw
## How to Use?
Click on the *Use in paddlenlp* button on the top right!
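Alternatively, a minimal feature-extraction sketch with PaddleNLP (a sketch only, assuming the standard `ErnieModel`/`ErnieTokenizer` classes can load this checkpoint; the Chinese sentence is illustrative):
```python
import paddle
from paddlenlp.transformers import ErnieModel, ErnieTokenizer

tokenizer = ErnieTokenizer.from_pretrained("PaddlePaddle/ernie-1.0-large-zh-cw")
model = ErnieModel.from_pretrained("PaddlePaddle/ernie-1.0-large-zh-cw")

# Illustrative sentence: "Welcome to use PaddlePaddle."
inputs = tokenizer("欢迎使用飞桨")
inputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}
sequence_output, pooled_output = model(**inputs)
print(sequence_output.shape)
```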
## Citation Info
```text
@article{ernie2.0,
title = {ERNIE: Enhanced Representation through Knowledge Integration},
author = {Sun, Yu and Wang, Shuohuan and Li, Yukun and Feng, Shikun and Chen, Xuyi and Zhang, Han and Tian, Xin and Zhu, Danxiang and Tian, Hao and Wu, Hua},
journal={arXiv preprint arXiv:1904.09223},
year = {2019},
}
```
|
acf97185027333132a9c563f9a1cca0c
|
jbreuch/bert-news-v2
|
jbreuch
|
bert
| 4 | 2 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,323 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-news-v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7052, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
4714c1db2b858a68ecf80037ae8ba3e3
|
Duskfallcrew/duskfall-s-vaporwave-aesthetic
|
Duskfallcrew
| null | 21 | 5 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 1,293 | false |
[](https://huggingface.co/spaces/Duskfallcrew/duskfall-s-vaporwave-aesthetic)
### Duskfall's Vaporwave Aesthetic Dreambooth model trained by Duskfallcrew with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
ALL UPDATES TO ALL MODELS after training:
https://civitai.com/user/duskfallcrew
I'm having trouble RE uploading files after doing a dumb.
SO you'll have to find the model when it's uploaded to civit!
If you want to donate towards costs and don't want to subscribe:
https://ko-fi.com/DUSKFALLcrew
If you want to monthly support the EARTH & DUSK media projects and not just AI:
https://www.patreon.com/earthndusk
Discord: https://discord.gg/Da7s8d3KJ7
Discord server DOES have pluralkit installed..
This is because DUSKFALL and Earth & Dusk are plural friendly
As well as the fact it's well known that Duskfallcrew and models are centered
around a lot of the aesthetic and feeling of their battles with Neurodivergency and DID.
|
95c1b94e980f59eea6ccb2a30a76031b
|
asapp/sew-d-base-plus-400k
|
asapp
|
sew-d
| 5 | 2 |
transformers
| 0 |
feature-extraction
| true | false | false |
apache-2.0
|
['en']
|
['librispeech_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['speech']
| false | true | true | 1,701 | false |
# SEW-D-base+
[SEW-D by ASAPP Research](https://github.com/asappresearch/sew)
The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc.
Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)
Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi
**Abstract**
This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes.
The original model can be found under https://github.com/asappresearch/sew#model-checkpoints .
# Usage
See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
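A minimal feature-extraction sketch (a sketch only, assuming the standard `AutoFeatureExtractor`/`SEWDModel` classes and 16 kHz mono input; the audio array below is a placeholder):
```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, SEWDModel

feature_extractor = AutoFeatureExtractor.from_pretrained("asapp/sew-d-base-plus-400k")
model = SEWDModel.from_pretrained("asapp/sew-d-base-plus-400k")

# Placeholder: one second of silence at 16 kHz; replace with real speech.
speech = np.zeros(16000, dtype=np.float32)

inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state
print(hidden_states.shape)
```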
|
9811fe399d6c2a6eb39eb16aa0191ed4
|
tomekkorbak/keen_clarke
|
tomekkorbak
|
gpt2
| 23 | 1 |
transformers
| 0 | null | true | false | false |
mit
|
['en']
|
['tomekkorbak/detoxify-pile-chunk3-0-50000', 'tomekkorbak/detoxify-pile-chunk3-50000-100000', 'tomekkorbak/detoxify-pile-chunk3-100000-150000', 'tomekkorbak/detoxify-pile-chunk3-150000-200000', 'tomekkorbak/detoxify-pile-chunk3-200000-250000', 'tomekkorbak/detoxify-pile-chunk3-250000-300000', 'tomekkorbak/detoxify-pile-chunk3-300000-350000', 'tomekkorbak/detoxify-pile-chunk3-350000-400000', 'tomekkorbak/detoxify-pile-chunk3-400000-450000', 'tomekkorbak/detoxify-pile-chunk3-450000-500000', 'tomekkorbak/detoxify-pile-chunk3-500000-550000', 'tomekkorbak/detoxify-pile-chunk3-550000-600000', 'tomekkorbak/detoxify-pile-chunk3-600000-650000', 'tomekkorbak/detoxify-pile-chunk3-650000-700000', 'tomekkorbak/detoxify-pile-chunk3-700000-750000', 'tomekkorbak/detoxify-pile-chunk3-750000-800000', 'tomekkorbak/detoxify-pile-chunk3-800000-850000', 'tomekkorbak/detoxify-pile-chunk3-850000-900000', 'tomekkorbak/detoxify-pile-chunk3-900000-950000', 'tomekkorbak/detoxify-pile-chunk3-950000-1000000', 'tomekkorbak/detoxify-pile-chunk3-1000000-1050000', 'tomekkorbak/detoxify-pile-chunk3-1050000-1100000', 'tomekkorbak/detoxify-pile-chunk3-1100000-1150000', 'tomekkorbak/detoxify-pile-chunk3-1150000-1200000', 'tomekkorbak/detoxify-pile-chunk3-1200000-1250000', 'tomekkorbak/detoxify-pile-chunk3-1250000-1300000', 'tomekkorbak/detoxify-pile-chunk3-1300000-1350000', 'tomekkorbak/detoxify-pile-chunk3-1350000-1400000', 'tomekkorbak/detoxify-pile-chunk3-1400000-1450000', 'tomekkorbak/detoxify-pile-chunk3-1450000-1500000', 'tomekkorbak/detoxify-pile-chunk3-1500000-1550000', 'tomekkorbak/detoxify-pile-chunk3-1550000-1600000', 'tomekkorbak/detoxify-pile-chunk3-1600000-1650000', 'tomekkorbak/detoxify-pile-chunk3-1650000-1700000', 'tomekkorbak/detoxify-pile-chunk3-1700000-1750000', 'tomekkorbak/detoxify-pile-chunk3-1750000-1800000', 'tomekkorbak/detoxify-pile-chunk3-1800000-1850000', 'tomekkorbak/detoxify-pile-chunk3-1850000-1900000', 'tomekkorbak/detoxify-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,886 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# keen_clarke
This model was trained from scratch on the tomekkorbak/detoxify-pile-chunk3-0-50000, the tomekkorbak/detoxify-pile-chunk3-50000-100000, the tomekkorbak/detoxify-pile-chunk3-100000-150000, the tomekkorbak/detoxify-pile-chunk3-150000-200000, the tomekkorbak/detoxify-pile-chunk3-200000-250000, the tomekkorbak/detoxify-pile-chunk3-250000-300000, the tomekkorbak/detoxify-pile-chunk3-300000-350000, the tomekkorbak/detoxify-pile-chunk3-350000-400000, the tomekkorbak/detoxify-pile-chunk3-400000-450000, the tomekkorbak/detoxify-pile-chunk3-450000-500000, the tomekkorbak/detoxify-pile-chunk3-500000-550000, the tomekkorbak/detoxify-pile-chunk3-550000-600000, the tomekkorbak/detoxify-pile-chunk3-600000-650000, the tomekkorbak/detoxify-pile-chunk3-650000-700000, the tomekkorbak/detoxify-pile-chunk3-700000-750000, the tomekkorbak/detoxify-pile-chunk3-750000-800000, the tomekkorbak/detoxify-pile-chunk3-800000-850000, the tomekkorbak/detoxify-pile-chunk3-850000-900000, the tomekkorbak/detoxify-pile-chunk3-900000-950000, the tomekkorbak/detoxify-pile-chunk3-950000-1000000, the tomekkorbak/detoxify-pile-chunk3-1000000-1050000, the tomekkorbak/detoxify-pile-chunk3-1050000-1100000, the tomekkorbak/detoxify-pile-chunk3-1100000-1150000, the tomekkorbak/detoxify-pile-chunk3-1150000-1200000, the tomekkorbak/detoxify-pile-chunk3-1200000-1250000, the tomekkorbak/detoxify-pile-chunk3-1250000-1300000, the tomekkorbak/detoxify-pile-chunk3-1300000-1350000, the tomekkorbak/detoxify-pile-chunk3-1350000-1400000, the tomekkorbak/detoxify-pile-chunk3-1400000-1450000, the tomekkorbak/detoxify-pile-chunk3-1450000-1500000, the tomekkorbak/detoxify-pile-chunk3-1500000-1550000, the tomekkorbak/detoxify-pile-chunk3-1550000-1600000, the tomekkorbak/detoxify-pile-chunk3-1600000-1650000, the tomekkorbak/detoxify-pile-chunk3-1650000-1700000, the tomekkorbak/detoxify-pile-chunk3-1700000-1750000, the tomekkorbak/detoxify-pile-chunk3-1750000-1800000, the tomekkorbak/detoxify-pile-chunk3-1800000-1850000, the tomekkorbak/detoxify-pile-chunk3-1850000-1900000 and the tomekkorbak/detoxify-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 3147
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/detoxify-pile-chunk3-0-50000',
'tomekkorbak/detoxify-pile-chunk3-50000-100000',
'tomekkorbak/detoxify-pile-chunk3-100000-150000',
'tomekkorbak/detoxify-pile-chunk3-150000-200000',
'tomekkorbak/detoxify-pile-chunk3-200000-250000',
'tomekkorbak/detoxify-pile-chunk3-250000-300000',
'tomekkorbak/detoxify-pile-chunk3-300000-350000',
'tomekkorbak/detoxify-pile-chunk3-350000-400000',
'tomekkorbak/detoxify-pile-chunk3-400000-450000',
'tomekkorbak/detoxify-pile-chunk3-450000-500000',
'tomekkorbak/detoxify-pile-chunk3-500000-550000',
'tomekkorbak/detoxify-pile-chunk3-550000-600000',
'tomekkorbak/detoxify-pile-chunk3-600000-650000',
'tomekkorbak/detoxify-pile-chunk3-650000-700000',
'tomekkorbak/detoxify-pile-chunk3-700000-750000',
'tomekkorbak/detoxify-pile-chunk3-750000-800000',
'tomekkorbak/detoxify-pile-chunk3-800000-850000',
'tomekkorbak/detoxify-pile-chunk3-850000-900000',
'tomekkorbak/detoxify-pile-chunk3-900000-950000',
'tomekkorbak/detoxify-pile-chunk3-950000-1000000',
'tomekkorbak/detoxify-pile-chunk3-1000000-1050000',
'tomekkorbak/detoxify-pile-chunk3-1050000-1100000',
'tomekkorbak/detoxify-pile-chunk3-1100000-1150000',
'tomekkorbak/detoxify-pile-chunk3-1150000-1200000',
'tomekkorbak/detoxify-pile-chunk3-1200000-1250000',
'tomekkorbak/detoxify-pile-chunk3-1250000-1300000',
'tomekkorbak/detoxify-pile-chunk3-1300000-1350000',
'tomekkorbak/detoxify-pile-chunk3-1350000-1400000',
'tomekkorbak/detoxify-pile-chunk3-1400000-1450000',
'tomekkorbak/detoxify-pile-chunk3-1450000-1500000',
'tomekkorbak/detoxify-pile-chunk3-1500000-1550000',
'tomekkorbak/detoxify-pile-chunk3-1550000-1600000',
'tomekkorbak/detoxify-pile-chunk3-1600000-1650000',
'tomekkorbak/detoxify-pile-chunk3-1650000-1700000',
'tomekkorbak/detoxify-pile-chunk3-1700000-1750000',
'tomekkorbak/detoxify-pile-chunk3-1750000-1800000',
'tomekkorbak/detoxify-pile-chunk3-1800000-1850000',
'tomekkorbak/detoxify-pile-chunk3-1850000-1900000',
'tomekkorbak/detoxify-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'every_n_steps': 16,
'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048},
{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'challenging_rtp',
'num_samples': 2048,
'prompts_path': 'resources/challenging_rtp.jsonl'}],
'scorer_config': {'device': 'cuda:0'}},
'kl_gpt3_callback': {'every_n_steps': 16,
'force_call_on': [25354],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 1024,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'keen_clarke',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output104340',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 1673,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/38k781c0
|
1abf80753dc3a86ee4f1553a517b02d0
|
PublicPrompts/All-In-One-Pixel-Model
|
PublicPrompts
| null | 18 | 309 |
diffusers
| 92 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 4 | 0 | 4 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,547 | false |
Stable Diffusion model trained using DreamBooth to create pixel art, in 2 styles.
The sprite art can be used with the trigger word "pixelsprite".
The scene art can be used with the trigger word "16bitscene".
The art is not pixel perfect, but it can be fixed with pixelating tools like https://pinetools.com/pixelate-effect-image (they also have bulk pixelation)
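A minimal text-to-image sketch (a sketch only, assuming the repo ships standard `diffusers` weights; the prompt and settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "PublicPrompts/All-In-One-Pixel-Model", torch_dtype=torch.float16
).to("cuda")

# Use the trigger word "pixelsprite" for sprites or "16bitscene" for scenes.
image = pipe("a knight with a sword, pixelsprite").images[0]
image.save("knight_sprite.png")
```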
Some example generations:







|
5adddcba414842c33d5abc2a5ba0fb23
|
bdh240901/wav2vec2-large-xls-r-300m-vi-colab
|
bdh240901
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,100 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
e3aaf39a2c57e325cc320a6005e3fa22
|
hyunjongkimmath/notation_identification
|
hyunjongkimmath
| null | 4 | 0 |
fastai
| 0 | null | false | false | false |
gpl-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['fastai']
| false | true | true | 918 | false |
Details coming soon; in the meantime, see [`trouver`](https://github.com/hyunjongkimmath/trouver#use-an-ml-model-to-find-notations-introduced-in-text) for how this model is used.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
07c69aca34cba713912dc6aa2fe625a2
|
arize-ai/XLM-RoBERTa-xtreme-en
|
arize-ai
|
xlm-roberta
| 11 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme_en']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,383 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# XLM-RoBERTa-xtreme-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme_en dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2838
- Accuracy: 0.9109
- F1: 0.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
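A minimal token-classification sketch (assuming the checkpoint loads with the standard Transformers token-classification pipeline; the sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="arize-ai/XLM-RoBERTa-xtreme-en",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```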
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6502 | 1.0 | 235 | 0.3328 | 0.8995 | 0.7251 |
| 0.3239 | 2.0 | 470 | 0.2897 | 0.9101 | 0.7473 |
| 0.2644 | 3.0 | 705 | 0.2838 | 0.9109 | 0.7544 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
62cc4bc89ff6d1db5dd7a1272eb7e81d
|
sail/poolformer_m48
|
sail
|
poolformer
| 5 | 13 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'vision']
| false | true | true | 5,124 | false |
# PoolFormer (M48 model)
PoolFormer model trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418) by Yu et al. and first released in [this repository](https://github.com/sail-sg/poolformer).
## Model description
PoolFormer is a model that replaces the attention token mixer in transformers with an extremely simple operator, pooling.
Transformers have shown great potential in computer vision tasks. A common belief is their attention-based token mixer module contributes most to their competence. However, recent works show the attention-based module in transformers can be replaced by spatial MLPs and the resulted models still perform quite well. Based on this observation, we hypothesize that the general architecture of the transformers, instead of the specific token mixer module, is more essential to the model's performance. To verify this, we deliberately replace the attention module in transformers with an embarrassingly simple spatial pooling operator to conduct only the most basic token mixing. Surprisingly, we observe that the derived model, termed as PoolFormer, achieves competitive performance on multiple computer vision tasks. For example, on ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of PoolFormer verifies our hypothesis and urges us to initiate the concept of "MetaFormer", a general architecture abstracted from transformers without specifying the token mixer. Based on the extensive experiments, we argue that MetaFormer is the key player in achieving superior results for recent transformer and MLP-like models on vision tasks. This work calls for more future research dedicated to improving MetaFormer instead of focusing on the token mixer modules. Additionally, our proposed PoolFormer could serve as a starting baseline for future MetaFormer architecture design.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=sail/poolformer) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import PoolFormerFeatureExtractor, PoolFormerForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = PoolFormerFeatureExtractor.from_pretrained('sail/poolformer_m48')
model = PoolFormerForImageClassification.from_pretrained('sail/poolformer_m48')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
Currently, both the feature extractor and model support PyTorch.
## Training data
The poolformer model was trained on [ImageNet-1k](http://www.image-net.org/challenges/LSVRC/2012/), a dataset consisting of 1 million images and 1k classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/sail-sg/poolformer/blob/main/train.py#L529-L572).
### Pretraining
The model was trained on TPU-v3s. Training resolution is 224. For all hyperparameters (such as batch size and learning rate), please refer to the original paper.
## Evaluation results
| Model | ImageNet top-1 accuracy | # params | URL |
|---------------------------------------|-------------------------|----------|------------------------------------------------------------------|
| PoolFormer-S12 | 77.2 | 12M | https://huggingface.co/sail/poolformer_s12 |
| PoolFormer-S24 | 80.3 | 21M | https://huggingface.co/sail/poolformer_s24 |
| PoolFormer-S36 | 81.4 | 31M | https://huggingface.co/sail/poolformer_s36 |
| PoolFormer-M36 | 82.1 | 56M | https://huggingface.co/sail/poolformer_m36 |
| **PoolFormer-M48** | **82.5** | **73M** | **https://huggingface.co/sail/poolformer_m48** |
### BibTeX entry and citation info
```bibtex
@article{yu2021metaformer,
title={MetaFormer is Actually What You Need for Vision},
author={Yu, Weihao and Luo, Mi and Zhou, Pan and Si, Chenyang and Zhou, Yichen and Wang, Xinchao and Feng, Jiashi and Yan, Shuicheng},
journal={arXiv preprint arXiv:2111.11418},
year={2021}
}
```
|
de7fb73797698675d379eeb2481d0148
|
Davincilee/closure_system_door_inne-roberta-base
|
Davincilee
|
roberta
| 14 | 4 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,150 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# closure_system_door_inne-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6038
## Model description
More information needed
## Intended uses & limitations
More information needed
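A minimal fill-mask sketch (assuming the checkpoint loads with the standard fill-mask pipeline; the sentence is illustrative only):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Davincilee/closure_system_door_inne-roberta-base")

# RoBERTa-style mask token; the sentence is illustrative only.
print(fill_mask("The door <mask> system was inspected before assembly."))
```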
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3302 | 1.0 | 3 | 1.6837 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
0b14c808861ea0783835303aebba3525
|
Helsinki-NLP/opus-mt-lt-fr
|
Helsinki-NLP
|
marian
| 10 | 36 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-lt-fr
* source languages: lt
* target languages: fr
* OPUS readme: [lt-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lt-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lt-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.lt.fr | 22.0 | 0.428 |
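A minimal translation sketch (assuming the standard MarianMT classes; the Lithuanian sentence is illustrative only):
```python
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-lt-fr")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-lt-fr")

# "Hello, how are you?" in Lithuanian (illustrative only).
batch = tokenizer(["Labas, kaip sekasi?"], return_tensors="pt")
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```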
|
a01b846083ee3c441dc1bf091697a56f
|
fathyshalab/all-roberta-large-v1-small_talk-6-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,515 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-6-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
f0913442eece163391a4b3cc849cbe6c
|
anas-awadalla/roberta-large-houlsby-few-shot-k-32-finetuned-squad-seed-0
|
anas-awadalla
| null | 19 | 0 | null | 0 | null | false | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,096 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-houlsby-few-shot-k-32-finetuned-squad-seed-0
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 128
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 128
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 75
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
|
c218605d1b3b0e89e3234497da991522
|
theovercomer8/proto-amy1
|
theovercomer8
| null | 18 | 0 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 425 | false |
### proto-amy1 Dreambooth model trained by theovercomer8 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
8bb04309cc7ab1a512af838108c1adfc
|
halffried/midas_v3_dpt_large_384
|
halffried
| null | 3 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 754 | false |
## What is it?
Just a mirror of a model from https://github.com/isl-org/MiDaS, to allow downloading with Huggingface Hub tools
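A minimal download sketch (the checkpoint filename below is hypothetical; check the repository's file listing for the actual name):
```python
import torch
from huggingface_hub import hf_hub_download

# "dpt_large_384.pt" is a hypothetical filename; use the actual file in the repo.
checkpoint_path = hf_hub_download(
    repo_id="halffried/midas_v3_dpt_large_384",
    filename="dpt_large_384.pt",
)
state_dict = torch.load(checkpoint_path, map_location="cpu")
print(type(state_dict))
```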
## Citation
```bibtex
@ARTICLE {Ranftl2022,
author = "Ren\'{e} Ranftl and Katrin Lasinger and David Hafner and Konrad Schindler and Vladlen Koltun",
title = "Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer",
journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
year = "2022",
volume = "44",
number = "3"
}
```
```bibtex
@article{Ranftl2021,
author = {Ren\'{e} Ranftl and Alexey Bochkovskiy and Vladlen Koltun},
title = {Vision Transformers for Dense Prediction},
journal = {ICCV},
year = {2021},
}
```
|
78e43932935f854e66ac6382ea120a3d
|
abdouaziiz/wav2vec2-xls-r-300m-wolof-lm
|
abdouaziiz
|
wav2vec2
| 13 | 21 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'asr', 'pytorch', 'wav2vec2', 'wolof', 'wo']
| false | true | true | 4,378 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-wolof-lm
Wolof is a language spoken in Senegal and neighbouring countries. It is under-represented in NLP, with few resources available for text and speech.
This repository is our contribution toward closing that gap.
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m), combined with a language model and fine-tuned on the largest available Wolof speech dataset, [ALFFA_PUBLIC](https://github.com/besacier/ALFFA_PUBLIC/tree/master/ASR/WOLOF).
It achieves the following results on the evaluation set:
- Loss: 0.367826
- Wer: 0.212565
## Model description
The training data amounts to 16.8 hours of audio, split into 10,000 audio files for training and 3,339 for testing.
## Training and evaluation data
We evaluate and log the model every 1,500 steps and save a checkpoint every 33,340 steps.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-4
- train_batch_size: 3
- eval_batch_size: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10.0
### Training results
| Step  | Training Loss | Validation Loss | Wer      |
|:-----:|:-------------:|:---------------:|:--------:|
| 1500  | 2.854200      | 0.642243        | 0.543964 |
| 3000  | 0.599200      | 0.468138        | 0.429549 |
| 4500  | 0.468300      | 0.433436        | 0.405644 |
| 6000  | 0.427000      | 0.384873        | 0.344150 |
| 7500  | 0.377000      | 0.374003        | 0.323892 |
| 9000  | 0.337000      | 0.363674        | 0.306189 |
| 10500 | 0.302400      | 0.349884        | 0.283908 |
| 12000 | 0.264100      | 0.344104        | 0.277120 |
| 13500 | 0.254000      | 0.341820        | 0.271316 |
| 15000 | 0.208400      | 0.326502        | 0.260695 |
| 16500 | 0.203500      | 0.326209        | 0.250313 |
| 18000 | 0.159800      | 0.323539        | 0.239851 |
| 19500 | 0.158200      | 0.310694        | 0.230028 |
| 21000 | 0.132800      | 0.338318        | 0.229283 |
| 22500 | 0.112800      | 0.336765        | 0.224145 |
| 24000 | 0.103600      | 0.350208        | 0.227073 |
| 25500 | 0.091400      | 0.353609        | 0.221589 |
| 27000 | 0.084400      | 0.367826        | 0.212565 |
## Usage
The model can be used directly as follows:
```python
import re
import warnings

import librosa
import pandas as pd
import torch
from transformers import AutoProcessor, AutoModelForCTC
from datasets import Dataset, DatasetDict
from datasets import load_metric

wer_metric = load_metric("wer")

wolof = pd.read_csv('Test.csv')  # Test.csv contains the columns "file" and "transcription"
wolof = DatasetDict({'test': Dataset.from_pandas(wolof)})

chars_to_ignore_regex = '[\"\?\.\!\-\;\:\(\)\,]'

def remove_special_characters(batch):
    batch["transcription"] = re.sub(chars_to_ignore_regex, '', batch["transcription"]).lower() + " "
    return batch

wolof = wolof.map(remove_special_characters)

processor = AutoProcessor.from_pretrained("abdouaziiz/wav2vec2-xls-r-300m-wolof-lm")
model = AutoModelForCTC.from_pretrained("abdouaziiz/wav2vec2-xls-r-300m-wolof-lm")

warnings.filterwarnings("ignore")

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = librosa.load(batch["file"], sr=16000)
    batch["speech"] = speech_array.astype('float16')
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["transcription"]
    return batch

wolof = wolof.map(speech_file_to_array_fn, remove_columns=wolof.column_names["test"], num_proc=1)

def map_to_result(batch):
    model.to("cuda")
    input_values = processor(
        batch["speech"],
        sampling_rate=batch["sampling_rate"],
        return_tensors="pt"
    ).input_values.to("cuda")
    with torch.no_grad():
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    return batch

results = wolof["test"].map(map_to_result)

# "transcription" is dropped by remove_columns above, so compare against "target_text".
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["target_text"])))
```
## PS
The results can be further improved by:
- combining wav2vec2 with a language model,
- building a spellchecker from the text of the data,
- applying sentence edit distance.
|
71d6201fabac57746c2b6dfc847f5c8f
|
chandank/bart-base-finetuned-kaggglenews-batch8
|
chandank
|
bart
| 13 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,245 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-kaggglenews-batch8
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|:---------:|:-------:|
| No log | 1.0 | 495 | 1.6409 | 27.9647 | 15.4352 | 23.611 | 25.107 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
d2b08cd91ecf93f2c33fcc07b78ef111
|
nlp-waseda/roberta-base-japanese
|
nlp-waseda
|
roberta
| 7 | 3,787 |
transformers
| 15 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ja']
|
['wikipedia', 'cc100']
| null | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
[]
| false | true | true | 2,126 | false |
# nlp-waseda/roberta-base-japanese
## Model description
This is a Japanese RoBERTa base model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
```
You can fine-tune this model on downstream tasks.
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
`BertJapaneseTokenizer` now supports automatic `JumanppTokenizer` and `SentencepieceTokenizer`. You can use [this model](https://huggingface.co/nlp-waseda/roberta-base-japanese-with-auto-jumanpp) without any data preprocessing.
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took a week using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 1e-4
- per_device_train_batch_size: 256
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 4096
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 700000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
## Performance on JGLUE
See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
|
d92f67924d835b4c8f4b9f57e7bbe79f
|
omar47/wav2vec2-large-xls-r-300m-urdu-v2
|
omar47
|
wav2vec2
| 19 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,132 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-urdu-CV_8_0-and-PRUS_v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3541
- Wer: 0.6532
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.8521 | 0.52 | 32 | 20.0617 | 1.0 |
| 9.2152 | 1.05 | 64 | 7.8943 | 1.0 |
| 4.8598 | 1.57 | 96 | 5.1558 | 1.0 |
| 3.866 | 2.1 | 128 | 3.9680 | 1.0 |
| 3.3517 | 2.62 | 160 | 3.4201 | 1.0 |
| 3.2029 | 3.15 | 192 | 3.2355 | 1.0 |
| 3.1509 | 3.67 | 224 | 3.2337 | 1.0 |
| 3.1399 | 4.2 | 256 | 3.1627 | 1.0 |
| 3.0848 | 4.72 | 288 | 3.0550 | 1.0 |
| 2.9806 | 5.25 | 320 | 2.8343 | 0.9996 |
| 2.3814 | 5.77 | 352 | 2.0685 | 0.9523 |
| 1.2936 | 6.3 | 384 | 1.5907 | 0.8657 |
| 0.8656 | 6.82 | 416 | 1.3810 | 0.8235 |
| 0.7014 | 7.34 | 448 | 1.3838 | 0.7920 |
| 0.6015 | 7.87 | 480 | 1.3479 | 0.8046 |
| 0.5341 | 8.39 | 512 | 1.2613 | 0.7757 |
| 0.5031 | 8.92 | 544 | 1.2818 | 0.7890 |
| 0.4349 | 9.44 | 576 | 1.3171 | 0.7739 |
| 0.4198 | 9.97 | 608 | 1.2420 | 0.7750 |
| 0.3593 | 10.49 | 640 | 1.2991 | 0.7587 |
| 0.3252 | 11.02 | 672 | 1.2653 | 0.7228 |
| 0.2715 | 11.54 | 704 | 1.2488 | 0.7350 |
| 0.2733 | 12.07 | 736 | 1.2639 | 0.7110 |
| 0.2338 | 12.59 | 768 | 1.3733 | 0.7454 |
| 0.2403 | 13.11 | 800 | 1.3908 | 0.7228 |
| 0.2106 | 13.64 | 832 | 1.3384 | 0.7224 |
| 0.2041 | 14.16 | 864 | 1.3770 | 0.7050 |
| 0.1814 | 14.69 | 896 | 1.3526 | 0.6932 |
| 0.1742 | 15.21 | 928 | 1.3486 | 0.6895 |
| 0.1658 | 15.74 | 960 | 1.3210 | 0.6936 |
| 0.1455 | 16.26 | 992 | 1.3292 | 0.6858 |
| 0.1399 | 16.79 | 1024 | 1.3521 | 0.6828 |
| 0.1325 | 17.31 | 1056 | 1.3339 | 0.6876 |
| 0.1256 | 17.84 | 1088 | 1.3389 | 0.6836 |
| 0.1219 | 18.36 | 1120 | 1.3496 | 0.6769 |
| 0.1212 | 18.89 | 1152 | 1.3277 | 0.6776 |
| 0.1097 | 19.41 | 1184 | 1.3594 | 0.6762 |
| 0.1129 | 19.93 | 1216 | 1.3448 | 0.6688 |
| 0.1036 | 20.46 | 1248 | 1.3295 | 0.6710 |
| 0.1035 | 20.98 | 1280 | 1.3243 | 0.6577 |
| 0.094 | 21.51 | 1312 | 1.3832 | 0.6591 |
| 0.0912 | 22.03 | 1344 | 1.3857 | 0.6584 |
| 0.0815 | 22.56 | 1376 | 1.3739 | 0.6547 |
| 0.0864 | 23.08 | 1408 | 1.3649 | 0.6554 |
| 0.0772 | 23.61 | 1440 | 1.3791 | 0.6458 |
| 0.0894 | 24.13 | 1472 | 1.3630 | 0.6488 |
| 0.0776 | 24.66 | 1504 | 1.3541 | 0.6532 |
### Framework versions
- Transformers 4.19.1
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
a6b838dbd5e1561b0ea74cfdef943026
|
marifulhaque/wav2vec2-large-teacher-en-asr-timit
|
marifulhaque
|
wav2vec2
| 16 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,768 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-teacher-en-asr-timit
This model is a fine-tuned version of [facebook/wav2vec2-large](https://huggingface.co/facebook/wav2vec2-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4160
- Wer: 0.2984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.4952 | 3.17 | 200 | 3.0555 | 1.0 |
| 2.5341 | 6.35 | 400 | 0.8144 | 0.7441 |
| 0.694 | 9.52 | 600 | 0.4154 | 0.4572 |
| 0.3593 | 12.7 | 800 | 0.4260 | 0.3890 |
| 0.2567 | 15.87 | 1000 | 0.4166 | 0.3614 |
| 0.1988 | 19.05 | 1200 | 0.3912 | 0.3346 |
| 0.1338 | 22.22 | 1400 | 0.4000 | 0.3178 |
| 0.1044 | 25.4 | 1600 | 0.4425 | 0.3071 |
| 0.0786 | 28.57 | 1800 | 0.4160 | 0.2984 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
2cb241c3bb136ddd070814c32a907917
|
fatenghali/text_classification_model
|
fatenghali
|
distilbert
| 16 | 43 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,266 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_classification_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3686
- F1: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.2356 | 1.0 | 7215 | 0.3704 | 0.8946 |
| 0.2011 | 2.0 | 14430 | 0.3686 | 0.8968 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
e1f0defb5039c4ed6feb2ffc59426b00
|
Saisam/gpt-neo-math-small
|
Saisam
|
gpt_neo
| 9 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 656 | false |
# GPT-NEO-Model for Lean Tactics
In this project, we used a Hugging Face GPT-Neo small model and fine-tuned it on the tactic dataset. The input should be of the form
```
<GOAL> Goal <PROOFSTEP>
```
The model can easily be accessed using the following code.
```
from transformers import GPT2Tokenizer, GPTNeoForCausalLM
import torch
tokenizer = GPT2Tokenizer.from_pretrained("Saisam/gpt-neo-math-small")
model = GPTNeoForCausalLM.from_pretrained("Saisam/gpt-neo-math-small")
```
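A minimal generation sketch using the prompt format above (the goal text and generation settings below are illustrative placeholders, not taken from the original dataset):
```
# Continuing from the snippet above: generate a proof step for a goal.
# The goal below is only a placeholder example.
prompt = "<GOAL> n : ℕ ⊢ n + 0 = n <PROOFSTEP>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```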
More information can be found at https://github.com/saisurbehera/mathProof.
The current model beats GPT-F on the miniF2F benchmark.
Worked alongside Xihao Xhang and Moya Zhu.
|
9f2eaa2f3cb164d08aaab7633e3f530e
|
yhavinga/ul2-small-dutch-english
|
yhavinga
|
t5
| 15 | 57 |
transformers
| 0 |
text2text-generation
| true | false | true |
apache-2.0
|
['nl', 'en', 'multilingual']
|
['yhavinga/mc4_nl_cleaned', 'yhavinga/nedd_wiki_news']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['dutch', 'english', 't5', 't5x', 'ul2', 'seq2seq']
| false | true | true | 10,327 | false |
# ul2-small-dutch-english for Dutch and English
Pretrained T5 model on Dutch and English using a UL2 (Mixture-of-Denoisers) objective.
The T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on
a specific downstream task to be useful in practice.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
`ul2-small-dutch-english` T5 is a transformers model pretrained on a very large corpus of
Dutch and English data in a self-supervised fashion.
This means it was pretrained on the raw texts only, with no humans labelling them in any way
(which is why it can use lots of publicly available data) with an automatic process to generate
inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off during pre-training. Dropout should be re-enabled during fine-tuning
- Pre-trained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training
paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where
the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers
that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of
three denoising tasks:
1. R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective;
2. X-denoising (or extreme span corruption); and
3. S-denoising (or sequential PrefixLM).
During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training
denoising task. During the pre-training, a paradigm token is inserted to the input
(`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand.
Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream
fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way excluding any supervised training.
Therefore, this model has to be fine-tuned before it is usable on a downstream task,
like text classification, unlike the Google's original T5 model.
**Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision,
so fine-tune them with full fp32 precision. Fine-tuning with Flax in bf16 - `model.to_bf16()` - is possible
if you set the mask correctly to exclude layernorm and embedding layers. Also note that the T5x pre-training
and fine-tuning configs set `z_loss` to 1e-4, which is used to keep the loss scale from underflowing.
You can also find more fine-tuning tips from [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, most likely you can get better results if you insert a prefix token
of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts.
For general language understanding fine-tuning tasks, you could use the `[NLU]` token.
For GPT-style causal language generation, you could use the `[S2S]` token.
The token `[NLG]` of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language
generation, so the token `[NLG]` could perhaps be used for language generation fine-tuning too.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-small-dutch-english", use_fast=False)
model = T5ForConditionalGeneration.from_pretrained("yhavinga/ul2-small-dutch-english")
```
and in Flax:
```python
from transformers import T5Tokenizer, FlaxT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("yhavinga/ul2-small-dutch-english", use_fast=False)
model = FlaxT5ForConditionalGeneration.from_pretrained("yhavinga/ul2-small-dutch-english")
```
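As described above, a mode token can be prepended to the input text; a minimal sketch (the Dutch sentence is a placeholder, and `tokenizer` is the one loaded above):
```python
# Prepend a UL2 mode token to the input; "[NLU]" is shown here,
# "[NLG]" and "[S2S]" are used the same way.
text = "[NLU] Het is vandaag een mooie dag."  # placeholder sentence
input_ids = tokenizer(text, return_tensors="pt").input_ids
print(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()))
```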
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
The `ul2-small-dutch-english` T5 model was pre-trained simultaneously on a combination of several datasets,
including the `full_en_nl` config of the "mc4_nl_cleaned" dataset, which is a cleaned version of Common Crawl's web
crawl corpus, Dutch books, the Dutch subset of Wikipedia (2022-03-20), the English subset of Wikipedia (2022-03-01),
and a subset of "mc4_nl_cleaned"
containing only texts from Dutch and Belgian newspapers. This last dataset is oversampled to bias the model
towards descriptions of events in the Netherlands and Belgium.
## Training procedure
### Preprocessing
The ul2-small-dutch-english T5 model uses a SentencePiece unigram tokenizer with a vocabulary of 32,000 tokens.
The tokenizer includes the special tokens `<pad>`, `</s>`, `<unk>`, known from the original T5 paper,
`[NLU]`, `[NLG]` and `[S2S]` for the MoD pre-training, and `<n>` for newline.
During pre-training with the UL2 objective, input and output sequences consist of 512 consecutive tokens.
The tokenizer does not lowercase texts and is therefore case-sensitive; it distinguishes
between `dutch` and `Dutch`.
Additionally, 100+28 extra tokens were added for pre-training tasks, resulting in a total of 32,128 tokens.
### Pretraining
The model was trained on TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/),
for 1000000 steps with a batch size of 128
(in total 65 B tokens).
The optimizer used was AdaFactor with learning rate warmup for 10K steps with a constant learning rate of 1e-2,
and then an inverse square root decay (exponential decay) of the learning rate after.
The model was trained with Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) with help
from [Stephenn Fernandes](https://huggingface.co/StephennFernandes) to get started writing task definitions that wrap
HF datasets.
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and
slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2 by the authors
of the Finnish ul2 models. The UL2 objective code used is available in the repository
[Finnish-NLP/ul2-base-nl36-finnish](https://huggingface.co/Finnish-NLP/ul2-base-nl36-finnish) in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to that of the UL2 paper,
except for the denoiser mixing rates: 20% was used for S-denoising (as suggested in chapter 4.5 of the paper)
and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each).
### Model list
Models in this series:
| | ul2-base-dutch-english | ul2-large-dutch-english | ul2-small-dutch-english |
|:---------------------|:-------------------------|:--------------------------|:--------------------------|
| model_type | t5 | t5 | t5 |
| _pipeline_tag | text2text-generation | text2text-generation | text2text-generation |
| d_model | 768 | 1024 | 512 |
| d_ff | 2048 | 2816 | 1024 |
| num_heads | 12 | 16 | 6 |
| d_kv | 64 | 64 | 64 |
| num_layers | 12 | 24 | 8 |
| num_decoder_layers | 12 | 24 | 8 |
| feed_forward_proj | gated-gelu | gated-gelu | gated-gelu |
| dense_act_fn | gelu_new | gelu_new | gelu_new |
| vocab_size | 32128 | 32128 | 32128 |
| tie_word_embeddings | 0 | 0 | 0 |
| torch_dtype | float32 | float32 | float32 |
| _gin_batch_size | 128 | 64 | 128 |
| _gin_z_loss | 0.0001 | 0.0001 | 0.0001 |
| _gin_t5_config_dtype | 'bfloat16' | 'bfloat16' | 'bfloat16' |
## Evaluation results
See the evaluation section in the interactive [Pre-training Dutch T5 Models](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models) blog.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective and associated task definitions.
Thanks to [Stephenn Fernandes](https://huggingface.co/StephennFernandes) for helping me get started with the t5x framework.
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
a90e3dff1e499ef32db307f7fc00b77b
|
muhtasham/small-mlm-glue-stsb-target-glue-qqp
|
muhtasham
|
bert
| 10 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,934 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-mlm-glue-stsb-target-glue-qqp
This model is a fine-tuned version of [muhtasham/small-mlm-glue-stsb](https://huggingface.co/muhtasham/small-mlm-glue-stsb) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3294
- Accuracy: 0.8525
- F1: 0.8131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4739 | 0.04 | 500 | 0.4259 | 0.7919 | 0.7514 |
| 0.4186 | 0.09 | 1000 | 0.3841 | 0.8190 | 0.7709 |
| 0.3984 | 0.13 | 1500 | 0.3737 | 0.8228 | 0.7757 |
| 0.3853 | 0.18 | 2000 | 0.3725 | 0.8228 | 0.7878 |
| 0.3761 | 0.22 | 2500 | 0.3558 | 0.8362 | 0.7969 |
| 0.3616 | 0.26 | 3000 | 0.3434 | 0.8418 | 0.8010 |
| 0.3616 | 0.31 | 3500 | 0.3286 | 0.8504 | 0.8008 |
| 0.3528 | 0.35 | 4000 | 0.3293 | 0.8513 | 0.8110 |
| 0.358 | 0.4 | 4500 | 0.3213 | 0.8539 | 0.8104 |
| 0.3428 | 0.44 | 5000 | 0.3294 | 0.8525 | 0.8131 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
743e91398e6a321e6bf6105d347407d2
|
jonatasgrosman/wav2vec2-large-xlsr-53-portuguese
|
jonatasgrosman
|
wav2vec2
| 24 | 11,152 |
transformers
| 10 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['pt']
|
['common_voice', 'mozilla-foundation/common_voice_6_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard', 'mozilla-foundation/common_voice_6_0', 'pt', 'robust-speech-event', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,047 | false |
# Fine-tuned XLSR-53 large model for speech recognition in Portuguese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Portuguese using the train and validation splits of [Common Voice 6.1](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned thanks to the GPU credits generously given by the [OVHcloud](https://www.ovhcloud.com/en/public-cloud/ai-training/) :)
The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
## Usage
The model can be used directly (without a language model) as follows...
Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
```python
from huggingsound import SpeechRecognitionModel
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-portuguese")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
transcriptions = model.transcribe(audio_paths)
```
Writing your own inference script:
```python
import torch
import librosa
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
LANG_ID = "pt"
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-portuguese"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
predicted_sentences = processor.batch_decode(predicted_ids)
for i, predicted_sentence in enumerate(predicted_sentences):
print("-" * 100)
print("Reference:", test_dataset[i]["sentence"])
print("Prediction:", predicted_sentence)
```
| Reference | Prediction |
| ------------- | ------------- |
| NEM O RADAR NEM OS OUTROS INSTRUMENTOS DETECTARAM O BOMBARDEIRO STEALTH. | NEMHUM VADAN OS OLTWES INSTRUMENTOS DE TTÉÃN UM BOMBERDEIRO OSTER |
| PEDIR DINHEIRO EMPRESTADO ÀS PESSOAS DA ALDEIA | E DIR ENGINHEIRO EMPRESTAR AS PESSOAS DA ALDEIA |
| OITO | OITO |
| TRANCÁ-LOS | TRANCAUVOS |
| REALIZAR UMA INVESTIGAÇÃO PARA RESOLVER O PROBLEMA | REALIZAR UMA INVESTIGAÇÃO PARA RESOLVER O PROBLEMA |
| O YOUTUBE AINDA É A MELHOR PLATAFORMA DE VÍDEOS. | YOUTUBE AINDA É A MELHOR PLATAFOMA DE VÍDEOS |
| MENINA E MENINO BEIJANDO NAS SOMBRAS | MENINA E MENINO BEIJANDO NAS SOMBRAS |
| EU SOU O SENHOR | EU SOU O SENHOR |
| DUAS MULHERES QUE SENTAM-SE PARA BAIXO LENDO JORNAIS. | DUAS MIERES QUE SENTAM-SE PARA BAICLANE JODNÓI |
| EU ORIGINALMENTE ESPERAVA | EU ORIGINALMENTE ESPERAVA |
## Evaluation
1. To evaluate on `mozilla-foundation/common_voice_6_0` with split `test`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-portuguese --dataset mozilla-foundation/common_voice_6_0 --config pt --split test
```
2. To evaluate on `speech-recognition-community-v2/dev_data`
```bash
python eval.py --model_id jonatasgrosman/wav2vec2-large-xlsr-53-portuguese --dataset speech-recognition-community-v2/dev_data --config pt --split validation --chunk_length_s 5.0 --stride_length_s 1.0
```
## Citation
If you want to cite this model you can use this:
```bibtex
@misc{grosman2021xlsr53-large-portuguese,
title={Fine-tuned {XLSR}-53 large model for speech recognition in {P}ortuguese},
author={Grosman, Jonatas},
howpublished={\url{https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-portuguese}},
year={2021}
}
```
|
1bae249e8dec2c85bd82191387574a28
|
google/multiberts-seed_4-step_180k
|
google
|
bert
| 8 | 12 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_4', 'multiberts-seed_4-step_180k']
| false | true | true | 3,521 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 4, Step 180k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #4, captured at step 180k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_180k')
model = TFBertModel.from_pretrained("google/multiberts-seed_4-step_180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_4-step_180k')
model = BertModel.from_pretrained("google/multiberts-seed_4-step_180k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
14b2658e0b4ac9c27aa39a9d831e22ba
|
akusov/durka-fusion
|
akusov
| null | 16 | 185 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 420 | false |
### durka-fusion Dreambooth model trained by akusov with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
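A minimal generation sketch with diffusers, assuming the repository follows the standard diffusers layout; the prompt below is only a placeholder, not a documented trigger word:
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth concept and generate an image.
# The prompt below is a placeholder, not a documented trigger word.
pipe = StableDiffusionPipeline.from_pretrained("akusov/durka-fusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a portrait in the durka style", num_inference_steps=30).images[0]
image.save("sample.png")
```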
Sample pictures of this concept:
|
b261c1ac793fe06aa494d9fa62fb5dd2
|
schoenml/bert-emotion
|
schoenml
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,455 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-emotion
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1531
- Precision: 0.7296
- Recall: 0.7266
- Fscore: 0.7278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Fscore |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.8418 | 1.0 | 815 | 0.8129 | 0.7960 | 0.6242 | 0.6420 |
| 0.5222 | 2.0 | 1630 | 0.9663 | 0.7584 | 0.7196 | 0.7324 |
| 0.2662 | 3.0 | 2445 | 1.1531 | 0.7296 | 0.7266 | 0.7278 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
f6582e31914fb984c365686b6c9c1de0
|
Helsinki-NLP/opus-mt-toi-en
|
Helsinki-NLP
|
marian
| 10 | 11 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-toi-en
* source languages: toi
* target languages: en
* OPUS readme: [toi-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/toi-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/toi-en/opus-2020-01-16.eval.txt)
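A minimal usage sketch with the standard MarianMT classes in transformers (the source sentence below is only a placeholder):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-toi-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate Tonga (toi) text into English; the input is a placeholder.
batch = tokenizer(["Mwabuka buti?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```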
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.toi.en | 39.0 | 0.539 |
|
c4520396bd0e2224892ce84263361a77
|
s50227harry/TCFD-BERT
|
s50227harry
|
roberta
| 10 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['generated_from_trainer']
| true | true | true | 1,843 | false |
Using the ClimateBERT-f model as a starting point, the TCFD-BERT language model is additionally pre-trained on paragraphs specifically related to climate change.
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TCFD-BERT
It achieves the following results on the evaluation set:
- Loss: 1.1325
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.865 | 0.37 | 500 | 1.4460 |
| 1.6601 | 0.73 | 1000 | 1.3491 |
| 1.593 | 1.1 | 1500 | 1.3190 |
| 1.5336 | 1.46 | 2000 | 1.2801 |
| 1.5081 | 1.83 | 2500 | 1.2446 |
| 1.4547 | 2.19 | 3000 | 1.2281 |
| 1.4358 | 2.56 | 3500 | 1.2065 |
| 1.4121 | 2.92 | 4000 | 1.1874 |
| 1.396 | 3.29 | 4500 | 1.1817 |
| 1.383 | 3.65 | 5000 | 1.1747 |
| 1.3662 | 4.02 | 5500 | 1.1717 |
| 1.3545 | 4.38 | 6000 | 1.1567 |
| 1.3441 | 4.75 | 6500 | 1.1325 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bc3d1221f2f020354e2be38644138f51
|
patrickvonplaten/deberta_v3_amazon_reviews
|
patrickvonplaten
|
deberta-v2
| 19 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 984 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_v3_amazon_reviews
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
9a0ccc8454f2af4d8f425ebb365bbda0
|
nlpaueb/bert-base-uncased-eurlex
|
nlpaueb
|
bert
| 8 | 264 |
transformers
| 5 |
fill-mask
| true | true | true |
cc-by-sa-4.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['legal']
| false | true | true | 10,956 | false |
# LEGAL-BERT: The Muppets straight out of Law School
<img align="left" src="https://i.ibb.co/p3kQ7Rw/Screenshot-2020-10-06-at-12-16-36-PM.png" width="100"/>
LEGAL-BERT is a family of BERT models for the legal domain, intended to assist legal NLP research, computational law, and legal technology applications. To pre-train the different variations of LEGAL-BERT, we collected 12 GB of diverse English legal text from several fields (e.g., legislation, court cases, contracts) scraped from publicly available resources. Sub-domain variants (CONTRACTS-, EURLEX-, ECHR-) and/or general LEGAL-BERT perform better than using BERT out of the box for domain-specific tasks.<br>
This is the sub-domain variant pre-trained on EU legislation.
<br/><br/>
---
I. Chalkidis, M. Fergadiotis, P. Malakasiotis, N. Aletras and I. Androutsopoulos. "LEGAL-BERT: The Muppets straight out of Law School". In Findings of Empirical Methods in Natural Language Processing (EMNLP 2020) (Short Papers), to be held online, 2020. (https://aclanthology.org/2020.findings-emnlp.261)
---
## Pre-training corpora
The pre-training corpora of LEGAL-BERT include:
* 116,062 documents of EU legislation, publicly available from EURLEX (http://eur-lex.europa.eu), the repository of EU Law running under the EU Publication Office.
* 61,826 documents of UK legislation, publicly available from the UK legislation portal (http://www.legislation.gov.uk).
* 19,867 cases from the European Court of Justice (ECJ), also available from EURLEX.
* 12,554 cases from HUDOC, the repository of the European Court of Human Rights (ECHR) (http://hudoc.echr.coe.int/eng).
* 164,141 cases from various courts across the USA, hosted in the Case Law Access Project portal (https://case.law).
* 76,366 US contracts from EDGAR, the database of US Securities and Exchange Commission (SECOM) (https://www.sec.gov/edgar.shtml).
## Pre-training details
* We trained BERT using the official code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* We released a model similar to the English BERT-BASE model (12-layer, 768-hidden, 12-heads, 110M parameters).
* We chose to follow the same training set-up: 1 million training steps with batches of 256 sequences of length 512 with an initial learning rate 1e-4.
* We were able to use a single Google Cloud TPU v3-8 provided for free from [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc), while also utilizing [GCP research credits](https://edu.google.com/programs/credits/research). Huge thanks to both Google programs for supporting us!
## Models list
| Model name | Model Path | Training corpora |
| ------------------- | ------------------------------------ | ------------------- |
| CONTRACTS-BERT-BASE | `nlpaueb/bert-base-uncased-contracts` | US contracts |
| EURLEX-BERT-BASE | `nlpaueb/bert-base-uncased-eurlex` | EU legislation |
| ECHR-BERT-BASE | `nlpaueb/bert-base-uncased-echr` | ECHR cases |
| LEGAL-BERT-BASE * | `nlpaueb/legal-bert-base-uncased` | All |
| LEGAL-BERT-SMALL | `nlpaueb/legal-bert-small-uncased` | All |
\* LEGAL-BERT-BASE is the model referred to as LEGAL-BERT-SC in Chalkidis et al. (2020); a model trained from scratch on the legal corpora mentioned above, using a newly created vocabulary built by a sentence-piece tokenizer trained on the very same corpora.
\*\* As many of you expressed interest in the LEGAL-BERT-FP models (those relying on the original BERT-BASE checkpoint), they have been released on Archive.org (https://archive.org/details/legal_bert_fp), as these models are secondary and possibly only interesting for those who aim to dig deeper into the open questions of Chalkidis et al. (2020).
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-uncased-eurlex")
model = AutoModel.from_pretrained("nlpaueb/bert-base-uncased-eurlex")
```
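For masked-token predictions like those shown in the table of the next section, the fill-mask pipeline can be used (a minimal sketch; the example sentence is the EURLEX one from the table):
```python
from transformers import pipeline

# Predict the masked token with the EURLEX variant.
fill_mask = pipeline("fill-mask", model="nlpaueb/bert-base-uncased-eurlex")
predictions = fill_mask(
    "Establishing a system for the identification and registration of "
    "[MASK] animals and regarding the labelling of beef and beef products ."
)
for p in predictions:
    print(p["token_str"], round(p["score"], 2))
```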
## Use LEGAL-BERT variants as Language Models
| Corpus | Model | Masked token | Predictions |
| --------------------------------- | ---------------------------------- | ------------ | ------------ |
| | **BERT-BASE-UNCASED** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('new', '0.09'), ('current', '0.04'), ('proposed', '0.03'), ('marketing', '0.03'), ('joint', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.32'), ('rape', '0.22'), ('abuse', '0.14'), ('death', '0.04'), ('violence', '0.03')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('farm', '0.25'), ('livestock', '0.08'), ('draft', '0.06'), ('domestic', '0.05'), ('wild', '0.05')
| | **CONTRACTS-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('letter', '0.38'), ('dealer', '0.04'), ('employment', '0.03'), ('award', '0.03'), ('contribution', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('death', '0.39'), ('imprisonment', '0.07'), ('contempt', '0.05'), ('being', '0.03'), ('crime', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('domestic', '0.18'), ('laboratory', '0.07'), ('household', '0.06'), ('personal', '0.06'), ('the', '0.04')
| | **EURLEX-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('supply', '0.11'), ('cooperation', '0.08'), ('service', '0.07'), ('licence', '0.07'), ('distribution', '0.05')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.66'), ('death', '0.07'), ('imprisonment', '0.07'), ('murder', '0.04'), ('rape', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.43'), ('pet', '0.28'), ('certain', '0.05'), ('fur', '0.03'), ('the', '0.02')
| | **ECHR-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('second', '0.24'), ('latter', '0.10'), ('draft', '0.05'), ('bilateral', '0.05'), ('arbitration', '0.04')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.99'), ('death', '0.01'), ('inhuman', '0.00'), ('beating', '0.00'), ('rape', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('pet', '0.17'), ('all', '0.12'), ('slaughtered', '0.10'), ('domestic', '0.07'), ('individual', '0.05')
| | **LEGAL-BERT-BASE** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('settlement', '0.26'), ('letter', '0.23'), ('dealer', '0.04'), ('master', '0.02'), ('supplemental', '0.02')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '1.00'), ('detention', '0.00'), ('arrest', '0.00'), ('rape', '0.00'), ('death', '0.00')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('live', '0.67'), ('beef', '0.17'), ('farm', '0.03'), ('pet', '0.02'), ('dairy', '0.01')
| | **LEGAL-BERT-SMALL** |
| (Contracts) | This [MASK] Agreement is between General Motors and John Murray . | employment | ('license', '0.09'), ('transition', '0.08'), ('settlement', '0.04'), ('consent', '0.03'), ('letter', '0.03')
| (ECHR) | The applicant submitted that her husband was subjected to treatment amounting to [MASK] whilst in the custody of Adana Security Directorate | torture | ('torture', '0.59'), ('pain', '0.05'), ('ptsd', '0.05'), ('death', '0.02'), ('tuberculosis', '0.02')
| (EURLEX) | Establishing a system for the identification and registration of [MASK] animals and regarding the labelling of beef and beef products . | bovine | ('all', '0.08'), ('live', '0.07'), ('certain', '0.07'), ('the', '0.07'), ('farm', '0.05')
## Evaluation on downstream tasks
Consider the experiments in the article "LEGAL-BERT: The Muppets straight out of Law School". Chalkidis et al., 2020, (https://aclanthology.org/2020.findings-emnlp.261)
## Author - Publication
```
@inproceedings{chalkidis-etal-2020-legal,
title = "{LEGAL}-{BERT}: The Muppets straight out of Law School",
author = "Chalkidis, Ilias and
Fergadiotis, Manos and
Malakasiotis, Prodromos and
Aletras, Nikolaos and
Androutsopoulos, Ion",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
doi = "10.18653/v1/2020.findings-emnlp.261",
pages = "2898--2904"
}
```
## About Us
[AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts.
The group's current research interests include:
* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,
* natural language generation from databases and ontologies, especially Semantic Web ontologies,
* text classification, including filtering spam and abusive content,
* information extraction and opinion mining, including legal text analytics and sentiment analysis,
* natural language processing tools for Greek, for example parsers and named-entity recognizers,
* machine learning in natural language processing, especially deep learning.
The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.
[Ilias Chalkidis](https://iliaschalkidis.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
2fec84f90a208d74917e9c3c5dc0fc4a
|
hady/wav2vec2-base-timit-demo-colab
|
hady
|
wav2vec2
| 14 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,014 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
545cd28a2a32bf455ae6d18699569139
|
ZubairAzimMiazi/whisper-small-bn
|
ZubairAzimMiazi
|
whisper
| 7 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['bn']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 966 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Bn - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
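A minimal transcription sketch with the ASR pipeline (the audio file path is a placeholder):
```python
from transformers import pipeline

# Transcribe a Bengali audio clip with the fine-tuned checkpoint.
# "sample_bn.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="ZubairAzimMiazi/whisper-small-bn")
print(asr("sample_bn.wav")["text"])
```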
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
cf56f2113922ce5039476a0664b607fb
|
wietsedv/xlm-roberta-base-ft-udpos28-la
|
wietsedv
|
xlm-roberta
| 8 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['la']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['part-of-speech', 'token-classification']
| true | true | true | 565 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Latin
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-la")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-la")
```
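A minimal tagging sketch using the objects loaded above (the Latin sentence is only an example):
```python
from transformers import pipeline

# Wrap the loaded model and tokenizer in a token-classification pipeline.
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer)
for token in tagger("Gallia est omnis divisa in partes tres."):
    print(token["word"], token["entity"])
```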
|
3c9e284177e4d276fd0e677c9cca0f45
|
monakth/distilbert-base-cased-finetuned-squad
|
monakth
|
distilbert
| 12 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,278 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1787
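A minimal usage sketch with the question-answering pipeline (the question and context below are placeholders):
```python
from transformers import pipeline

# Extractive question answering with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="monakth/distilbert-base-cased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a DistilBERT checkpoint fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```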
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2453 | 1.0 | 5546 | 1.2056 |
| 0.9606 | 2.0 | 11092 | 1.1385 |
| 0.7447 | 3.0 | 16638 | 1.1787 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
082611090884d2128b007daf9e192a4a
|
microsoft/deberta-v2-xxlarge
|
microsoft
|
deberta-v2
| 8 | 9,165 |
transformers
| 12 |
fill-mask
| true | true | false |
mit
|
['en']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['deberta', 'fill-mask']
| false | true | true | 4,606 | false |
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa V2 xxlarge model with 48 layers, 1536 hidden size. The total parameters are 1.5B and it is trained with 160GB raw data.
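A minimal loading sketch for fill-mask inference (the example sentence is a placeholder; note the checkpoint is roughly 1.5B parameters):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xxlarge")
model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-v2-xxlarge")

# Score a placeholder sentence and decode the most likely token at [MASK].
inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(-1)))
```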
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks based on [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 will also be slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **deepspeed** as it's faster and saves memory.
Run with `Deepspeed`,
```bash
pip install datasets
pip install deepspeed
# Download the deepspeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \\
run_glue.py \\
--model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME \\
--do_train \\
--do_eval \\
--max_seq_length 256 \\
--per_device_train_batch_size ${batch_size} \\
--learning_rate 3e-6 \\
--num_train_epochs 3 \\
--output_dir $output_dir \\
--overwrite_output_dir \\
--logging_steps 10 \\
--logging_dir $output_dir \\
--deepspeed ds_config.json
```
You can also run with `--sharded_ddp`
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \\
--task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \\
--learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
cd6d277a49f79712e7fee66d3e5cb811
|
annahaz/distilbert-base-multilingual-cased-finetuned-misogyny
|
annahaz
|
distilbert
| 9 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,366 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-misogyny
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0045
- Accuracy: 0.9990
- F1: 0.9989
- Precision: 0.9989
- Recall: 0.9989
- Mae: 0.0010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Mae |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.2987 | 1.0 | 1759 | 0.3910 | 0.8164 | 0.8186 | 0.7793 | 0.8621 | 0.1836 |
| 0.2507 | 2.0 | 3518 | 0.2399 | 0.9029 | 0.9043 | 0.8589 | 0.9547 | 0.0971 |
| 0.1793 | 3.0 | 5277 | 0.1412 | 0.9479 | 0.9483 | 0.9068 | 0.9937 | 0.0521 |
| 0.1062 | 4.0 | 7036 | 0.0570 | 0.9828 | 0.9823 | 0.9702 | 0.9947 | 0.0172 |
| 0.0732 | 5.0 | 8795 | 0.0293 | 0.9924 | 0.9921 | 0.9885 | 0.9958 | 0.0076 |
| 0.0461 | 6.0 | 10554 | 0.0157 | 0.9960 | 0.9958 | 0.9937 | 0.9979 | 0.0040 |
| 0.037 | 7.0 | 12313 | 0.0126 | 0.9975 | 0.9974 | 0.9948 | 1.0 | 0.0025 |
| 0.0311 | 8.0 | 14072 | 0.0092 | 0.9980 | 0.9979 | 0.9958 | 1.0 | 0.0020 |
| 0.0141 | 9.0 | 15831 | 0.0065 | 0.9985 | 0.9984 | 0.9979 | 0.9989 | 0.0015 |
| 0.0119 | 10.0 | 17590 | 0.0045 | 0.9990 | 0.9989 | 0.9989 | 0.9989 | 0.0010 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.9.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
da7bfe2c2cf5a7e569e0aa69262fcf52
|
Zia/distilbert-base-uncased-finetuned-emotion
|
Zia
|
distilbert
| 16 | 29 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,410 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1707
- Accuracy: 0.9365
- F1: 0.9367
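As a sketch of how the scores above translate into predictions, the snippet below runs the raw classification head directly (assuming the checkpoint exposes an `id2label` mapping for the emotion classes; otherwise generic `LABEL_i` names are returned, and the input sentence is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Zia/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't wait to see you again!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]  # class probabilities
pred = int(probs.argmax())
print(model.config.id2label.get(pred, pred), float(probs[pred]))
```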
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0746 | 1.0 | 250 | 0.1932 | 0.9335 | 0.9330 |
| 0.0565 | 2.0 | 500 | 0.1774 | 0.939 | 0.9391 |
| 0.0539 | 3.0 | 750 | 0.1707 | 0.9365 | 0.9367 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
be1a85a813f867633bd447321bf26a5b
|
NerdyRodent/rodent-diffusion-1-5
|
NerdyRodent
| null | 8 | 0 | null | 11 |
text-to-image
| false | false | false |
creativeml-openrail-m
|
['en']
| null | null | 2 | 0 | 0 | 2 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 7,539 | false |
# Rodent Diffusion 1.5 Model Card
Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
The **Rodent-Diffusion-1-5** checkpoint was created with a custom Stable Diffusion v1.4 model as the base.
Starting from that base, small merges (weights of 0.1-0.3) from the models listed below were blended in. Some trigger keywords may still work, but for the most part no special prompt tokens are needed.
Files are located in the "Files and versions" tab.
<a href="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/blob/main/rodent-diffusion-1.5.safetensors">Safetensors file</a>
Models:
- analogDiffusion
- Knolling Case
- RPGDiffusion
- classicnegative
- cuteRich
- inkpunk
- evoartMj4
- dreamshaper
- deliberate
# Examples
<img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00806-Professional%2C_full-colour%2C_HD_digital_portrait_photo_of_a_hipster._Detailed%2C_intricate_hair%2C_high_definition._Focused%2C_crisp%2C_cl_3642035934_Euler%20a.png" width="30%"/>
<sub>Professional, full-colour, HD digital portrait photo of a hipster. Detailed, intricate hair, high definition. Focused, crisp, clear and sharp. Ultra-realistic cinematic film still. taken with the Canon m50, 50mm focal. pastel shades AND professional photo of a hipster with vivid, vibrant earthy tones. 1960s Technicolor 16mm celluloid film look. Coffee bar in the background. Decaf latte.
Negative prompt: blurry, smudge, smear, painting, anime, sketch, doodle, illustration, drawing
Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 3642035934, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased)
</sub>
<img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/Rodent.png" width="30%"/>
<sub>Professional, full-colour, HD digital portrait photo of a humanoid rat. Detailed, intricate hair, high definition. Focused, crisp, clear and sharp. Ultra-realistic cinematic film still. taken with the Canon m50, 50mm focal. pastel shades AND professional photo of a rodent druid wearing amazing armour. Vibrant earthy tones. 1960s Technicolor 16mm celluloid film look. Gothic castle background.
Negative prompt: blurry, smudge, smear, painting, anime, sketch, doodle, illustration, drawing
Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 2537406181, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased)
</sub>
<img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00827-Amazing_painting_of_a_stunning_African_woman._Incredible_hairstyle%2C_high_definition._Focused%2C_crisp%2C_clear_and_sharp._Ultra-real_3784463460_Euler%20a.png" width="30%"/>
<sub>
Amazing painting of a stunning African woman. Incredible hairstyle, high definition. Focused, crisp, clear and sharp. Ultra-realistic. vibrant colours. AND matte portrait painting, cute African lady from the future. Vibrant brush strokes. oil on canvas, realism, acrylic impressionism neo-science fiction aesthetic with fantasy undertones mixed to create a warm feeling. 80's look and feel
Negative prompt: 3d, render, blurry, smudge, smear, photo
Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 3784463462, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased)
</sub>
<img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00841-Anime_style_painting_of_a_Tokyo_street._Calm_and_peaceful._Relaxing._Incredible_definition_and_detail._Crisp%2C_clear_and_sharp_fo_2306894277_Euler%20a.png" width="30%"/>
<sub>Anime style painting of a Tokyo street. Calm and peaceful. Relaxing. Incredible definition and detail. Crisp, clear and sharp focus. AND Anime inspired cinematic film still from the future the depicts a serene street during golden hour. Cel shading. Pastel shades and chilled vibes.
Negative prompt: 3d, render, blurry, smudge, smear, photo
Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 2306894277, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased)
</sub>
<img src="https://huggingface.co/NerdyRodent/rodent-diffusion-1-5/resolve/main/00849-Matte_painting_of_a_cat%2C_psychedelic_fractal_fur%2C_illusion%2C_ethereal_AND_oil_painting_of_a_surreal_cat_with_wild%2C_human-like_eye_2534465260_Euler%20a.png" width="30%"/>
<sub>Matte painting of a cat, psychedelic fractal fur, illusion, ethereal AND oil painting of a surreal cat with wild, human-like eyes and a massive grin
Negative prompt: 3d, render, blurry, smudge, smear, photo
Steps: 42, Sampler: Euler a, CFG scale: 5.25, Seed: 2534465260, Size: 512x640, Denoising strength: 0.666, Hires upscale: 1.689, Hires upscaler: Latent (bicubic antialiased)
</sub>
Due to the mixed licences of the merged models, this model is for personal use only, though I am working on an update with fewer restrictions.
## Original Stable Diffusion Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
- **Cite as:**
@InProceedings{Rombach_2022_CVPR,
author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
title = {High-Resolution Image Synthesis With Latent Diffusion Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10684-10695}
}
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
ea217d5ab1b342451229670e072c0b02
|
Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-new_dataset_50e
|
Gokulapriyan
|
swin
| 18 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,415 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-new_dataset_50e
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6407
- Accuracy: 0.7973
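A minimal inference sketch is shown below; the image path is a placeholder and the predicted class names come from the `imagefolder` dataset this checkpoint was trained on:
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Gokulapriyan/swin-tiny-patch4-window7-224-finetuned-new_dataset_50e",
)
# "example.jpg" is a placeholder path; a PIL image or an image URL also works
print(classifier("example.jpg", top_k=3))
```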
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.94 | 4 | 0.7081 | 0.6081 |
| No log | 1.94 | 8 | 0.7104 | 0.6351 |
| 0.5516 | 2.94 | 12 | 0.6911 | 0.6351 |
| 0.5516 | 3.94 | 16 | 0.7156 | 0.7027 |
| 0.537 | 4.94 | 20 | 0.7345 | 0.7297 |
| 0.537 | 5.94 | 24 | 0.6745 | 0.6892 |
| 0.537 | 6.94 | 28 | 0.7146 | 0.7297 |
| 0.5333 | 7.94 | 32 | 0.7057 | 0.6892 |
| 0.5333 | 8.94 | 36 | 0.6531 | 0.7027 |
| 0.4871 | 9.94 | 40 | 0.6405 | 0.7027 |
| 0.4871 | 10.94 | 44 | 0.6126 | 0.6892 |
| 0.4871 | 11.94 | 48 | 0.6303 | 0.7027 |
| 0.4432 | 12.94 | 52 | 0.6264 | 0.7027 |
| 0.4432 | 13.94 | 56 | 0.6347 | 0.7432 |
| 0.3669 | 14.94 | 60 | 0.6698 | 0.6622 |
| 0.3669 | 15.94 | 64 | 0.6346 | 0.7568 |
| 0.3669 | 16.94 | 68 | 0.6510 | 0.6892 |
| 0.3704 | 17.94 | 72 | 0.6491 | 0.6892 |
| 0.3704 | 18.94 | 76 | 0.5947 | 0.7568 |
| 0.3624 | 19.94 | 80 | 0.6248 | 0.7027 |
| 0.3624 | 20.94 | 84 | 0.6580 | 0.7027 |
| 0.3624 | 21.94 | 88 | 0.6345 | 0.7162 |
| 0.3164 | 22.94 | 92 | 0.6092 | 0.7568 |
| 0.3164 | 23.94 | 96 | 0.6498 | 0.7162 |
| 0.2777 | 24.94 | 100 | 0.6915 | 0.7703 |
| 0.2777 | 25.94 | 104 | 0.6482 | 0.7838 |
| 0.2777 | 26.94 | 108 | 0.6407 | 0.7973 |
| 0.2946 | 27.94 | 112 | 0.6135 | 0.7838 |
| 0.2946 | 28.94 | 116 | 0.6819 | 0.7568 |
| 0.2546 | 29.94 | 120 | 0.6401 | 0.7568 |
| 0.2546 | 30.94 | 124 | 0.6370 | 0.7432 |
| 0.2546 | 31.94 | 128 | 0.6488 | 0.7703 |
| 0.2477 | 32.94 | 132 | 0.6429 | 0.7973 |
| 0.2477 | 33.94 | 136 | 0.6540 | 0.7703 |
| 0.1968 | 34.94 | 140 | 0.5895 | 0.7973 |
| 0.1968 | 35.94 | 144 | 0.6242 | 0.7568 |
| 0.1968 | 36.94 | 148 | 0.6575 | 0.7568 |
| 0.2235 | 37.94 | 152 | 0.6263 | 0.7703 |
| 0.2235 | 38.94 | 156 | 0.6225 | 0.7838 |
| 0.2005 | 39.94 | 160 | 0.6731 | 0.7703 |
| 0.2005 | 40.94 | 164 | 0.6844 | 0.7703 |
| 0.2005 | 41.94 | 168 | 0.6550 | 0.7703 |
| 0.2062 | 42.94 | 172 | 0.6700 | 0.7703 |
| 0.2062 | 43.94 | 176 | 0.6661 | 0.7703 |
| 0.1933 | 44.94 | 180 | 0.6606 | 0.7838 |
| 0.1933 | 45.94 | 184 | 0.6757 | 0.7703 |
| 0.1933 | 46.94 | 188 | 0.6889 | 0.7568 |
| 0.1895 | 47.94 | 192 | 0.6940 | 0.7568 |
| 0.1895 | 48.94 | 196 | 0.6919 | 0.7568 |
| 0.1666 | 49.94 | 200 | 0.6899 | 0.7432 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
d6253aa433710a61e0277336d055065f
|
mehdidn/finetuned_bert_fa_zwnj_base_ner
|
mehdidn
|
bert
| 16 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,782 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_parsBERT_NER_fa
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the mixed NER dataset collected from ARMAN, PEYMA, and WikiANN.
It achieves the following results on the evaluation set:
- Loss: 0.0297
- Precision: 0.9481
- Recall: 0.9582
- F1: 0.9531
- Accuracy: 0.9942
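A minimal usage sketch with the token-classification pipeline follows; the example sentence is a placeholder, and the entity group names follow whatever tag set the mixed ARMAN/PEYMA/WikiANN data uses:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mehdidn/finetuned_bert_fa_zwnj_base_ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
for entity in ner("تهران پایتخت ایران است."):  # "Tehran is the capital of Iran."
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```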
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.12 | 1.0 | 1821 | 0.0543 | 0.8387 | 0.8577 | 0.8481 | 0.9830 |
| 0.0381 | 2.0 | 3642 | 0.0360 | 0.8941 | 0.9247 | 0.9091 | 0.9898 |
| 0.0168 | 3.0 | 5463 | 0.0282 | 0.9273 | 0.9452 | 0.9362 | 0.9927 |
| 0.0078 | 4.0 | 7284 | 0.0284 | 0.9391 | 0.9551 | 0.9470 | 0.9938 |
| 0.0033 | 5.0 | 9105 | 0.0297 | 0.9481 | 0.9582 | 0.9531 | 0.9942 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
0e9ca388739a7ff355074f3fd61970ef
|
DrishtiSharma/whisper-large-v2-hungarian
|
DrishtiSharma
|
whisper
| 15 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hu']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,308 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-V2 Hungarian
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2075
- Wer: 17.4533
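A minimal transcription sketch is shown below; the audio path is a placeholder, the pipeline decodes and resamples the file via ffmpeg, and `chunk_length_s` enables long-form transcription:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/whisper-large-v2-hungarian",
    chunk_length_s=30,  # chunk long audio into 30-second windows
)
# "sample_hu.wav" is a placeholder path
print(asr("sample_hu.wav")["text"])
```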
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1751 | 0.67 | 1000 | 0.2075 | 17.4533 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
48b8098688b42ffd110b013bfedad5f5
|
clementchadebec/reproduced_hvae
|
clementchadebec
| null | 7 | 0 |
pythae
| 0 | null | false | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pythae', 'reproducibility']
| false | true | true | 603 | false |
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_hvae")
```
## Reproducibility
This trained model reproduces the results of Table 1 in [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| HVAE (n_lf=4) | Binary MNIST | NLL (1000 IS) | 86.21 (0.01) | 86.40 |
[1] Salimans, T. et al, *Markov Chain Monte Carlo and Variational Inference: Bridging the Gap*, ICML 2015
|
cd6106d08f5d827c0ef928e1f445e763
|
henryscheible/mrpc_bert-base-uncased_81_v2
|
henryscheible
| null | 13 | 0 | null | 0 | null | true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,057 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mrpc_bert-base-uncased_81_v2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6390
- Accuracy: 0.8088
- F1: 0.8717
- Combined Score: 0.8403
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
aaa3eb34412bf92164fa936d0706694b
|
KoichiYasuoka/roberta-base-ainu
|
KoichiYasuoka
|
roberta
| 8 | 117 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ain']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['ainu', 'masked-lm']
| false | true | true | 790 | false |
# roberta-base-ainu
## Model Description
This is a RoBERTa model pre-trained on Ainu texts written in Katakana, Roman, and Cyrillic scripts. You can fine-tune `roberta-base-ainu` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-base-ainu-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-base-ainu-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-ainu")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-ainu")
```
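Once loaded, the model can also be queried through the fill-mask pipeline; the sentence below is only a placeholder and the mask token is read from the tokenizer:
```py
from transformers import pipeline
fill=pipeline("fill-mask",model=model,tokenizer=tokenizer)
# placeholder Roman-script input -- replace with real Ainu text
print(fill("aynu itak "+fill.tokenizer.mask_token+" pirka")[:3])
```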
## Reference
Koichi Yasuoka: [Development of RoBERTa and DeBERTa Models for Ainu Written in Roman, Katakana, and Cyrillic Scripts](http://id.nii.ac.jp/1001/00224072/), IPSJ SIG Technical Reports, Vol. 2023-CH-131 "Humanities and Computers", No. 7 (February 18, 2023), pp. 1-7.
|
1c785aca3d1beb03efaedbc32d32ee55
|
hjjeon/ddpm-butterflies-128
|
hjjeon
| null | 18 | 2 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,415 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline
model_id = "hjjeon/ddpm-butterflies-128"
# load model and scheduler
pipeline = DDPMPipeline.from_pretrained(model_id)
# run pipeline in inference
image = pipeline()["sample"]
# save image
image[0].save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/hjjeon/ddpm-butterflies-128/tensorboard?#scalars)
|
1404c7f91e3d106815e8a905948a3485
|
roschmid/distilbert-base-uncased-finetuned-ner
|
roschmid
|
distilbert
| 13 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,555 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0631
- Precision: 0.9207
- Recall: 0.9352
- F1: 0.9279
- Accuracy: 0.9832
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2399 | 1.0 | 878 | 0.0678 | 0.9097 | 0.9211 | 0.9154 | 0.9804 |
| 0.0502 | 2.0 | 1756 | 0.0628 | 0.9152 | 0.9320 | 0.9235 | 0.9820 |
| 0.0299 | 3.0 | 2634 | 0.0631 | 0.9207 | 0.9352 | 0.9279 | 0.9832 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
ed2386426c2a094ebc9dc3e1023b0a2c
|
jonatasgrosman/exp_w2v2r_fr_xls-r_age_teens-2_sixties-8_s82
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['fr']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'fr']
| false | true | true | 474 | false |
# exp_w2v2r_fr_xls-r_age_teens-2_sixties-8_s82
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
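A short transcription sketch with the HuggingSound library mentioned above (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_fr_xls-r_age_teens-2_sixties-8_s82")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```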
|
b8e7ae5fe83b8bd96eef7be2d36d2d6c
|
ganchengguang/RoBERTa-base-janpanese
|
ganchengguang
|
roberta
| 6 | 13 |
transformers
| 1 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 826 | false |
This is a RoBERTa model pretrained on Japanese-language texts (3.45 GB of Wikipedia text, trained for 1.65M steps) using a SentencePiece tokenizer.
If you want to fine-tune the model, please use:
```python
from transformers import BertTokenizer, RobertaModel
tokenizer = BertTokenizer.from_pretrained('ganchengguang/RoBERTa-base-janpanese')
model = RobertaModel.from_pretrained('ganchengguang/RoBERTa-base-janpanese')
```
The model reaches 95.4% accuracy on the JGLUE MARC-ja v1.0 binary sentiment classification task.
Contributed by the Mori Lab at Yokohama National University.
@article{liu2019roberta,
title={Roberta: A robustly optimized bert pretraining approach},
author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
journal={arXiv preprint arXiv:1907.11692},
year={2019}
}
|
65d3429bd1113815da6142bf91d1e116
|
yazdipour/text-to-sparql-t5-small
|
yazdipour
|
t5
| 11 | 8 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
[]
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,780 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-19_10-17_lastDS
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Gen Len: 19.0
- P: 0.5580
- R: 0.0884
- F1: 0.3129
- Score: 5.9585
- Bleu-precisions: [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271]
- Bleu-bp: 0.0763
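A generation sketch follows; the question is a placeholder, and the exact input format the model expects (e.g. any task prefix or entity annotations) is not documented in this card:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "yazdipour/text-to-sparql-t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "What is the capital of France?"  # placeholder natural-language question
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```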
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
773ea491a5983ce532eafb5693b40179
|
din0s/bart-large-asqa-cb
|
din0s
|
bart
| 11 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,814 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-asqa-cb
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4791
- Rougelsum: 38.2862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 3.347 | 1.0 | 545 | 2.5353 | 37.3812 |
| 2.7829 | 2.0 | 1090 | 2.5087 | 37.6431 |
| 2.6973 | 3.0 | 1635 | 2.4906 | 37.9194 |
| 2.6125 | 4.0 | 2180 | 2.4812 | 38.1180 |
| 2.5697 | 5.0 | 2725 | 2.4762 | 38.1616 |
| 2.5086 | 6.0 | 3270 | 2.4773 | 38.1370 |
| 2.4678 | 7.0 | 3815 | 2.4831 | 37.9346 |
| 2.4404 | 8.0 | 4360 | 2.4896 | 38.1150 |
| 2.3866 | 9.0 | 4905 | 2.4775 | 38.2222 |
| 2.3791 | 10.0 | 5450 | 2.4791 | 38.2862 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
7e59005da97bb6b4664913058d31dacb
|
Akihiro2/bert-finetuned-squad
|
Akihiro2
|
bert
| 12 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 954 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
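A minimal extractive-QA sketch (question and context are placeholders):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Akihiro2/bert-finetuned-squad")
result = qa(
    question="Where do penguins live?",
    context="Penguins are aquatic, flightless birds that live almost exclusively "
            "in the Southern Hemisphere.",
)
print(result["answer"], round(result["score"], 3))
```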
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
5f8db0a78a86230b1bbe56f31143878a
|
malcolm/TSC_SentimentA_IMDBAmznTSC_2
|
malcolm
|
distilbert
| 13 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,044 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TSC_SentimentA_IMDBAmznTSC_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1985
- Accuracy: 0.9365
- F1: 0.9373
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
9af4e7c23d0dbd13ec605129c543968d
|
jonatasgrosman/exp_w2v2t_ja_unispeech-sat_s635
|
jonatasgrosman
|
unispeech-sat
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ja']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ja']
| false | true | true | 463 | false |
# exp_w2v2t_ja_unispeech-sat_s635
Fine-tuned [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) for speech recognition using the train split of [Common Voice 7.0 (ja)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
4d42549d90bade60834ac2dc73e5718c
|
jonatasgrosman/exp_w2v2r_en_xls-r_age_teens-2_sixties-8_s717
|
jonatasgrosman
|
wav2vec2
| 10 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 475 | false |
# exp_w2v2r_en_xls-r_age_teens-2_sixties-8_s717
Fine-tuned [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
09d6c067e6abf80325cbdaaffe62530c
|
MultiBertGunjanPatrick/multiberts-seed-0-80k
|
MultiBertGunjanPatrick
|
bert
| 7 | 3 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-0']
| false | true | true | 6,479 | false |
# MultiBERTs Seed 0 Checkpoint 80k (uncased)
Seed 0 intermediate checkpoint 80k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint.
The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-0-80k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-0-80k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
b3db93d0a52aae654520401aa475c007
|
dminiotas05/distilbert-base-uncased-finetuned-ft1500_norm300_aug9
|
dminiotas05
|
distilbert
| 14 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,542 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ft1500_norm300_aug9
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0639
- Mse: 4.2557
- Mae: 1.3660
- R2: 0.4773
- Accuracy: 0.3664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.7595 | 1.0 | 3242 | 1.1009 | 4.4036 | 1.4148 | 0.4591 | 0.3440 |
| 0.6024 | 2.0 | 6484 | 1.0896 | 4.3582 | 1.3732 | 0.4647 | 0.3690 |
| 0.3745 | 3.0 | 9726 | 1.0639 | 4.2557 | 1.3660 | 0.4773 | 0.3664 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
d6234bee6136f7dd17a0f835ccd7bc73
|
sd-concepts-library/test-man
|
sd-concepts-library
| null | 8 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 906 | false |
### Test man on Stable Diffusion
This is the `<Test-man>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
bbcfee1323c3d7c226bee1fab60532e8
|
Helsinki-NLP/opus-mt-zle-en
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['be', 'ru', 'uk', 'zle', 'en']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,798 | false |
### zle-eng
* source group: East Slavic languages
* target group: English
* OPUS readme: [zle-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md)
* model: transformer
* source language(s): bel bel_Latn orv_Cyrl rue rus ukr
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.eval.txt)
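A short usage sketch following the standard Marian workflow is given below; the example sentences are placeholders in Russian and Ukrainian, and since only the source side is multilingual no language prefix token is required:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-zle-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Привет, как дела?", "Добрий день!"]  # Russian, Ukrainian placeholders
batch = tokenizer(src_text, return_tensors="pt", padding=True)
translated = model.generate(**batch)
print([tokenizer.decode(t, skip_special_tokens=True) for t in translated])
```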
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newstest2012-ruseng.rus.eng | 31.1 | 0.579 |
| newstest2013-ruseng.rus.eng | 24.9 | 0.522 |
| newstest2014-ruen-ruseng.rus.eng | 27.9 | 0.563 |
| newstest2015-enru-ruseng.rus.eng | 26.8 | 0.541 |
| newstest2016-enru-ruseng.rus.eng | 25.8 | 0.535 |
| newstest2017-enru-ruseng.rus.eng | 29.1 | 0.561 |
| newstest2018-enru-ruseng.rus.eng | 25.4 | 0.537 |
| newstest2019-ruen-ruseng.rus.eng | 26.8 | 0.545 |
| Tatoeba-test.bel-eng.bel.eng | 38.3 | 0.569 |
| Tatoeba-test.multi.eng | 50.1 | 0.656 |
| Tatoeba-test.orv-eng.orv.eng | 6.9 | 0.217 |
| Tatoeba-test.rue-eng.rue.eng | 15.4 | 0.345 |
| Tatoeba-test.rus-eng.rus.eng | 52.5 | 0.674 |
| Tatoeba-test.ukr-eng.ukr.eng | 52.1 | 0.673 |
### System Info:
- hf_name: zle-eng
- source_languages: zle
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zle-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['be', 'ru', 'uk', 'zle', 'en']
- src_constituents: {'bel', 'orv_Cyrl', 'bel_Latn', 'rus', 'ukr', 'rue'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/zle-eng/opus2m-2020-08-01.test.txt
- src_alpha3: zle
- tgt_alpha3: eng
- short_pair: zle-en
- chrF2_score: 0.6559999999999999
- bleu: 50.1
- brevity_penalty: 0.97
- ref_len: 69599.0
- src_name: East Slavic languages
- tgt_name: English
- train_date: 2020-08-01
- src_alpha2: zle
- tgt_alpha2: en
- prefer_old: False
- long_pair: zle-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
c5bc6035f6c6c33b267723e8c0e873ae
|
Helsinki-NLP/opus-mt-tc-big-ar-en
|
Helsinki-NLP
|
marian
| 13 | 279 |
transformers
| 0 |
translation
| true | true | false |
cc-by-4.0
|
['ar', 'en']
| null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['translation', 'opus-mt-tc']
| true | true | true | 5,261 | false |
# opus-mt-tc-big-ar-en
Neural machine translation model for translating from Arabic (ar) to English (en).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train).
* Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.)
```
@inproceedings{tiedemann-thottingal-2020-opus,
title = "{OPUS}-{MT} {--} Building open translation services for the World",
author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh},
booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation",
month = nov,
year = "2020",
address = "Lisboa, Portugal",
publisher = "European Association for Machine Translation",
url = "https://aclanthology.org/2020.eamt-1.61",
pages = "479--480",
}
@inproceedings{tiedemann-2020-tatoeba,
title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}",
author = {Tiedemann, J{\"o}rg},
booktitle = "Proceedings of the Fifth Conference on Machine Translation",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.wmt-1.139",
pages = "1174--1182",
}
```
## Model info
* Release: 2022-03-09
* source language(s): afb ara arz
* target language(s): eng
* model: transformer-big
* data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge))
* tokenization: SentencePiece (spm32k,spm32k)
* original model: [opusTCv20210807+bt_transformer-big_2022-03-09.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.zip)
* more information released models: [OPUS-MT ara-eng README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-eng/README.md)
## Usage
A short example code:
```python
from transformers import MarianMTModel, MarianTokenizer
src_text = [
"اتبع قلبك فحسب.",
"وين راهي دّوش؟"
]
model_name = "pytorch-models/opus-mt-tc-big-ar-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True))
for t in translated:
print( tokenizer.decode(t, skip_special_tokens=True) )
# expected output:
# Just follow your heart.
# Wayne Rahi Dosh?
```
You can also use OPUS-MT models with the transformers pipelines, for example:
```python
from transformers import pipeline
pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-ar-en")
print(pipe("اتبع قلبك فحسب."))
# expected output: Just follow your heart.
```
## Benchmarks
* test set translations: [opusTCv20210807+bt_transformer-big_2022-03-09.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.test.txt)
* test set scores: [opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-eng/opusTCv20210807+bt_transformer-big_2022-03-09.eval.txt)
* benchmark results: [benchmark_results.txt](benchmark_results.txt)
* benchmark output: [benchmark_translations.zip](benchmark_translations.zip)
| langpair | testset | chr-F | BLEU | #sent | #words |
|----------|---------|-------|-------|-------|--------|
| ara-eng | tatoeba-test-v2021-08-07 | 0.63477 | 47.3 | 10305 | 76975 |
| ara-eng | flores101-devtest | 0.66987 | 42.6 | 1012 | 24721 |
| ara-eng | tico19-test | 0.68521 | 44.4 | 2100 | 56323 |
## Acknowledgements
The work is supported by the [European Language Grid](https://www.european-language-grid.eu/) as [pilot project 2866](https://live.european-language-grid.eu/catalogue/#/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
## Model conversion info
* transformers version: 4.16.2
* OPUS-MT git hash: 3405783
* port time: Wed Apr 13 18:17:57 EEST 2022
* port machine: LM0-400-22516.local
|
aed0dff6d87de75432fc18ab1c91612b
|
dvitel/h2
|
dvitel
|
gpt2
| 12 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null |
['dvitel/hearthstone']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['distigpt2', 'hearthstone']
| true | true | true | 4,544 | false |
# h2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone).
[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h2.py).
It achieves the following results on the evaluation set:
- Loss: 2.5771
- Exact Match: 0.0
- Bleu: 0.6619
- Codebleu: 0.5374
- Ngram Match Score: 0.4051
- Weighted Ngram Match Score: 0.4298
- Syntax Match Score: 0.5605
- Dataflow Match Score: 0.7541
- Chrf: 73.9625
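A generation sketch for the fine-tuned checkpoint is shown below; the prompt is a placeholder, and the exact card-description serialization used during fine-tuning is defined in the linked h2.py script, not in this card:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dvitel/h2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "<card description here>"  # placeholder; see h2.py for the real input format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```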
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 1.2052 | 11.94 | 1600 | 1.2887 | 0.0 | 0.6340 | 0.4427 | 0.3384 | 0.3614 | 0.5263 | 0.5446 | 70.8004 |
| 0.3227 | 23.88 | 3200 | 1.4484 | 0.0 | 0.6575 | 0.5050 | 0.3767 | 0.3995 | 0.5955 | 0.6485 | 72.9553 |
| 0.205 | 35.82 | 4800 | 1.6392 | 0.0 | 0.6598 | 0.5174 | 0.3788 | 0.4022 | 0.5821 | 0.7063 | 73.2766 |
| 0.1392 | 47.76 | 6400 | 1.8219 | 0.0 | 0.6584 | 0.5279 | 0.3922 | 0.4159 | 0.5742 | 0.7294 | 73.5022 |
| 0.0979 | 59.7 | 8000 | 1.9416 | 0.0 | 0.6635 | 0.5305 | 0.4012 | 0.4248 | 0.5699 | 0.7261 | 73.8081 |
| 0.0694 | 71.64 | 9600 | 2.1793 | 0.0 | 0.6593 | 0.5400 | 0.4027 | 0.4271 | 0.5562 | 0.7739 | 73.6746 |
| 0.0512 | 83.58 | 11200 | 2.2547 | 0.0 | 0.6585 | 0.5433 | 0.4040 | 0.4283 | 0.5486 | 0.7921 | 73.7670 |
| 0.0399 | 95.52 | 12800 | 2.3037 | 0.0 | 0.6585 | 0.5354 | 0.4040 | 0.4282 | 0.5454 | 0.7640 | 73.7431 |
| 0.0316 | 107.46 | 14400 | 2.4113 | 0.0 | 0.6577 | 0.5294 | 0.4006 | 0.4257 | 0.5504 | 0.7409 | 73.7004 |
| 0.0254 | 119.4 | 16000 | 2.4407 | 0.0 | 0.6607 | 0.5412 | 0.4041 | 0.4285 | 0.5598 | 0.7723 | 73.8828 |
| 0.0208 | 131.34 | 17600 | 2.4993 | 0.0 | 0.6637 | 0.5330 | 0.4042 | 0.4286 | 0.5684 | 0.7310 | 74.1760 |
| 0.0176 | 143.28 | 19200 | 2.5138 | 0.0 | 0.6627 | 0.5434 | 0.4050 | 0.4295 | 0.5620 | 0.7772 | 74.0546 |
| 0.0158 | 155.22 | 20800 | 2.5589 | 0.0 | 0.6616 | 0.5347 | 0.4044 | 0.4291 | 0.5512 | 0.7541 | 73.9516 |
| 0.0147 | 167.16 | 22400 | 2.5554 | 0.0 | 0.6620 | 0.5354 | 0.4049 | 0.4295 | 0.5630 | 0.7442 | 73.9461 |
| 0.0134 | 179.1 | 24000 | 2.5696 | 0.0 | 0.6607 | 0.5395 | 0.4046 | 0.4293 | 0.5602 | 0.7640 | 73.8383 |
| 0.0135 | 191.04 | 25600 | 2.5771 | 0.0 | 0.6619 | 0.5374 | 0.4051 | 0.4298 | 0.5605 | 0.7541 | 73.9625 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
77096afd9524752dfd31f12dc1254ac8
|
Shaier/BERT_MC_OpenBookQA_w_wrong_context
|
Shaier
|
bert
| 10 | 0 |
transformers
| 0 |
multiple-choice
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,840 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_MC_OpenBookQA_w_wrong_context
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7450
- Accuracy: 0.922
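A sketch of how a multiple-choice head is queried follows; the question, context, and answer options are placeholders, and the exact context/question formatting used during fine-tuning is not documented here:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "Shaier/BERT_MC_OpenBookQA_w_wrong_context"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "Metal objects conduct electricity."              # placeholder context
question = "Which object conducts electricity best?"
choices = ["a rubber band", "a copper wire", "a wooden spoon", "a glass rod"]

# one (context + question, choice) pair per candidate answer
enc = tokenizer([f"{context} {question}"] * len(choices), choices,
                return_tensors="pt", padding=True, truncation=True)
batch = {k: v.unsqueeze(0) for k, v in enc.items()}          # (1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits                           # (1, num_choices)
print(choices[int(logits.argmax(dim=-1))])
```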
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3525 | 1.0 | 1859 | 0.2696 | 0.906 |
| 0.2084 | 2.0 | 3718 | 0.3284 | 0.9143 |
| 0.1263 | 3.0 | 5577 | 0.4205 | 0.9143 |
| 0.0734 | 4.0 | 7436 | 0.4688 | 0.9203 |
| 0.0437 | 5.0 | 9295 | 0.6266 | 0.9173 |
| 0.0357 | 6.0 | 11154 | 0.6934 | 0.9207 |
| 0.0264 | 7.0 | 13013 | 0.6947 | 0.92 |
| 0.0098 | 8.0 | 14872 | 0.6800 | 0.9197 |
| 0.0104 | 9.0 | 16731 | 0.7393 | 0.923 |
| 0.0067 | 10.0 | 18590 | 0.7846 | 0.9217 |
| 0.0034 | 11.0 | 20449 | 0.7450 | 0.922 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
5c76b6436bbd9748a128de6c8b69a66e
|
corbt/roberta-lora-2
|
corbt
|
roberta
| 8 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,586 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-lora-2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5790
- Mse: 0.5790
- Mae: 0.5751
- R2: 0.5572
- Accuracy: 0.5465
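The repository name suggests LoRA fine-tuning of `roberta-large`, and the MSE/MAE/R2 metrics suggest a single-label regression head; neither detail is stated explicitly in this card. A hedged sketch of how such a setup could be built with the `peft` library (the rank, alpha, and dropout values are illustrative assumptions, not the values actually used):
```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Base model with a single regression label, inferred from the MSE/MAE/R2 metrics.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-large", num_labels=1, problem_type="regression"
)

# Illustrative LoRA configuration; the actual adapter settings are not documented.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```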
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
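The results table below reports MSE, MAE, R2, and an accuracy column for a regression-style output; one plausible way to obtain all four is to round the continuous prediction to the nearest label before scoring accuracy. A hedged `compute_metrics` sketch along those lines (the rounding step is an assumption, not something stated in the card):
```python
import numpy as np
from sklearn.metrics import accuracy_score, mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions, axis=-1)  # regression head outputs shape (batch, 1)
    return {
        "mse": mean_squared_error(labels, predictions),
        "mae": mean_absolute_error(labels, predictions),
        "r2": r2_score(labels, predictions),
        # Assumption: accuracy counts a prediction as correct when it rounds to the label.
        "accuracy": accuracy_score(labels, np.round(predictions)),
    }
```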
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:--------:|
| 0.9268 | 0.02 | 2500 | 0.7467 | 0.7467 | 0.6737 | 0.4290 | 0.4621 |
| 0.7651 | 0.05 | 5000 | 0.7631 | 0.7631 | 0.6773 | 0.4164 | 0.4582 |
| 0.7399 | 0.07 | 7500 | 0.9654 | 0.9654 | 0.7675 | 0.2616 | 0.4104 |
| 0.7249 | 0.1 | 10000 | 0.7259 | 0.7259 | 0.6579 | 0.4449 | 0.4763 |
| 0.7122 | 0.12 | 12500 | 0.7292 | 0.7292 | 0.6596 | 0.4423 | 0.4753 |
| 0.7035 | 0.15 | 15000 | 0.7039 | 0.7039 | 0.6425 | 0.4616 | 0.4889 |
| 0.6992 | 0.17 | 17500 | 0.8192 | 0.8192 | 0.7018 | 0.3735 | 0.4485 |
| 0.6885 | 0.2 | 20000 | 0.8312 | 0.8312 | 0.7040 | 0.3643 | 0.4480 |
| 0.6974 | 0.22 | 22500 | 0.6822 | 0.6822 | 0.6317 | 0.4782 | 0.4987 |
| 0.6933 | 0.25 | 25000 | 0.7079 | 0.7079 | 0.6426 | 0.4586 | 0.4936 |
| 0.6972 | 0.27 | 27500 | 0.7470 | 0.7470 | 0.6638 | 0.4287 | 0.4768 |
| 0.6838 | 0.29 | 30000 | 0.6918 | 0.6918 | 0.6362 | 0.4709 | 0.5009 |
| 0.6766 | 0.32 | 32500 | 0.6597 | 0.6597 | 0.6199 | 0.4955 | 0.5035 |
| 0.6746 | 0.34 | 35000 | 0.7049 | 0.7049 | 0.6431 | 0.4609 | 0.4897 |
| 0.6742 | 0.37 | 37500 | 0.6701 | 0.6701 | 0.6240 | 0.4875 | 0.5096 |
| 0.6772 | 0.39 | 40000 | 0.6616 | 0.6616 | 0.6176 | 0.4940 | 0.5120 |
| 0.6717 | 0.42 | 42500 | 0.6548 | 0.6548 | 0.6187 | 0.4992 | 0.5072 |
| 0.6849 | 0.44 | 45000 | 0.6486 | 0.6486 | 0.6157 | 0.5039 | 0.5087 |
| 0.6727 | 0.47 | 47500 | 0.6829 | 0.6829 | 0.6294 | 0.4777 | 0.5030 |
| 0.7081 | 0.49 | 50000 | 0.6777 | 0.6777 | 0.6299 | 0.4817 | 0.5037 |
| 0.6692 | 0.52 | 52500 | 0.6634 | 0.6634 | 0.6206 | 0.4927 | 0.5078 |
| 0.6676 | 0.54 | 55000 | 0.6760 | 0.6760 | 0.6261 | 0.4830 | 0.5068 |
| 0.6575 | 0.56 | 57500 | 0.6301 | 0.6301 | 0.6060 | 0.5181 | 0.5172 |
| 0.6661 | 0.59 | 60000 | 0.6626 | 0.6626 | 0.6168 | 0.4933 | 0.5153 |
| 0.653 | 0.61 | 62500 | 0.6516 | 0.6516 | 0.6176 | 0.5017 | 0.5106 |
| 0.6583 | 0.64 | 65000 | 0.7014 | 0.7014 | 0.6400 | 0.4636 | 0.4951 |
| 0.6617 | 0.66 | 67500 | 0.6620 | 0.6620 | 0.6207 | 0.4937 | 0.5090 |
| 0.6475 | 0.69 | 70000 | 0.6286 | 0.6286 | 0.6037 | 0.5193 | 0.5223 |
| 0.6455 | 0.71 | 72500 | 0.7304 | 0.7304 | 0.6545 | 0.4414 | 0.4863 |
| 0.6464 | 0.74 | 75000 | 0.6246 | 0.6246 | 0.6006 | 0.5223 | 0.5199 |
| 0.646 | 0.76 | 77500 | 0.6414 | 0.6414 | 0.6124 | 0.5095 | 0.5126 |
| 0.6502 | 0.79 | 80000 | 0.6131 | 0.6131 | 0.5988 | 0.5311 | 0.5245 |
| 0.6443 | 0.81 | 82500 | 0.6376 | 0.6376 | 0.6064 | 0.5123 | 0.5229 |
| 0.641 | 0.83 | 85000 | 0.6399 | 0.6399 | 0.6096 | 0.5106 | 0.5163 |
| 0.6495 | 0.86 | 87500 | 0.6709 | 0.6709 | 0.6239 | 0.4869 | 0.5093 |
| 0.642 | 0.88 | 90000 | 0.6025 | 0.6025 | 0.5952 | 0.5392 | 0.5212 |
| 0.636 | 0.91 | 92500 | 0.6870 | 0.6870 | 0.6317 | 0.4746 | 0.5006 |
| 0.633 | 0.93 | 95000 | 0.6190 | 0.6190 | 0.5949 | 0.5266 | 0.5270 |
| 0.6316 | 0.96 | 97500 | 0.6053 | 0.6053 | 0.5926 | 0.5371 | 0.5280 |
| 0.6224 | 0.98 | 100000 | 0.6098 | 0.6098 | 0.5956 | 0.5336 | 0.5217 |
| 0.6304 | 1.01 | 102500 | 0.6124 | 0.6124 | 0.5949 | 0.5317 | 0.5280 |
| 0.6238 | 1.03 | 105000 | 0.6138 | 0.6138 | 0.5950 | 0.5306 | 0.5313 |
| 0.6228 | 1.06 | 107500 | 0.6302 | 0.6302 | 0.6038 | 0.5180 | 0.5189 |
| 0.6218 | 1.08 | 110000 | 0.6198 | 0.6198 | 0.5958 | 0.5260 | 0.5274 |
| 0.6164 | 1.1 | 112500 | 0.6045 | 0.6045 | 0.5895 | 0.5377 | 0.5327 |
| 0.6295 | 1.13 | 115000 | 0.6040 | 0.6040 | 0.5884 | 0.5381 | 0.5352 |
| 0.614 | 1.15 | 117500 | 0.5956 | 0.5956 | 0.5863 | 0.5445 | 0.5346 |
| 0.6016 | 1.18 | 120000 | 0.6208 | 0.6208 | 0.5994 | 0.5252 | 0.5246 |
| 0.6103 | 1.2 | 122500 | 0.6060 | 0.6060 | 0.5888 | 0.5366 | 0.5343 |
| 0.614 | 1.23 | 125000 | 0.6198 | 0.6198 | 0.5995 | 0.5259 | 0.5293 |
| 0.6113 | 1.25 | 127500 | 0.6010 | 0.6010 | 0.5874 | 0.5403 | 0.5340 |
| 0.6131 | 1.28 | 130000 | 0.6118 | 0.6118 | 0.5926 | 0.5321 | 0.5303 |
| 0.6069 | 1.3 | 132500 | 0.5914 | 0.5914 | 0.5815 | 0.5477 | 0.5406 |
| 0.6016 | 1.33 | 135000 | 0.5908 | 0.5908 | 0.5825 | 0.5482 | 0.5417 |
| 0.6053 | 1.35 | 137500 | 0.6166 | 0.6166 | 0.5939 | 0.5285 | 0.5317 |
| 0.5927 | 1.37 | 140000 | 0.5910 | 0.5910 | 0.5840 | 0.5480 | 0.5392 |
| 0.5942 | 1.4 | 142500 | 0.5965 | 0.5965 | 0.5856 | 0.5438 | 0.5387 |
| 0.5966 | 1.42 | 145000 | 0.6121 | 0.6121 | 0.5923 | 0.5319 | 0.5358 |
| 0.5941 | 1.45 | 147500 | 0.5889 | 0.5889 | 0.5814 | 0.5496 | 0.5373 |
| 0.6007 | 1.47 | 150000 | 0.5833 | 0.5833 | 0.5770 | 0.5539 | 0.5436 |
| 0.6024 | 1.5 | 152500 | 0.5862 | 0.5862 | 0.5786 | 0.5517 | 0.5423 |
| 0.5896 | 1.52 | 155000 | 0.5913 | 0.5913 | 0.5813 | 0.5478 | 0.5429 |
| 0.5906 | 1.55 | 157500 | 0.5944 | 0.5944 | 0.5854 | 0.5454 | 0.5373 |
| 0.5847 | 1.57 | 160000 | 0.5989 | 0.5989 | 0.5845 | 0.5419 | 0.5398 |
| 0.5837 | 1.6 | 162500 | 0.5914 | 0.5914 | 0.5822 | 0.5477 | 0.5394 |
| 0.5928 | 1.62 | 165000 | 0.5888 | 0.5888 | 0.5798 | 0.5497 | 0.5424 |
| 0.585 | 1.64 | 167500 | 0.5952 | 0.5952 | 0.5829 | 0.5448 | 0.5391 |
| 0.5929 | 1.67 | 170000 | 0.5829 | 0.5829 | 0.5768 | 0.5542 | 0.5440 |
| 0.5886 | 1.69 | 172500 | 0.5831 | 0.5831 | 0.5783 | 0.5540 | 0.5428 |
| 0.5793 | 1.72 | 175000 | 0.5857 | 0.5857 | 0.5776 | 0.5520 | 0.5453 |
| 0.5805 | 1.74 | 177500 | 0.5746 | 0.5746 | 0.5727 | 0.5606 | 0.5489 |
| 0.5875 | 1.77 | 180000 | 0.5798 | 0.5798 | 0.5739 | 0.5566 | 0.5487 |
| 0.5898 | 1.79 | 182500 | 0.5818 | 0.5818 | 0.5746 | 0.5550 | 0.5475 |
| 0.5884 | 1.82 | 185000 | 0.5736 | 0.5736 | 0.5722 | 0.5613 | 0.5496 |
| 0.5757 | 1.84 | 187500 | 0.5816 | 0.5816 | 0.5756 | 0.5552 | 0.5464 |
| 0.5789 | 1.87 | 190000 | 0.5846 | 0.5846 | 0.5774 | 0.5529 | 0.5448 |
| 0.575 | 1.89 | 192500 | 0.5866 | 0.5866 | 0.5779 | 0.5513 | 0.5443 |
| 0.5836 | 1.91 | 195000 | 0.5815 | 0.5815 | 0.5764 | 0.5552 | 0.5470 |
| 0.573 | 1.94 | 197500 | 0.5805 | 0.5805 | 0.5749 | 0.5561 | 0.5493 |
| 0.5728 | 1.96 | 200000 | 0.5808 | 0.5808 | 0.5757 | 0.5558 | 0.5474 |
| 0.5711 | 1.99 | 202500 | 0.5790 | 0.5790 | 0.5751 | 0.5572 | 0.5465 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
bc7aa86ab5f9d3466c0b9a88c8230049
|
nila-yuki/final_lab
|
nila-yuki
|
bert
| 8 | 6 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,416 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nila-yuki/final_lab
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0240
- Validation Loss: 0.0593
- Epoch: 2
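The pipeline tag is token-classification and only TensorFlow weights are listed, so the checkpoint can presumably be loaded with `TFAutoModelForTokenClassification`. A minimal usage sketch (the example sentence is invented, and the label set is not documented in this card):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_name = "nila-yuki/final_lab"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForTokenClassification.from_pretrained(model_name)

# Invented example sentence; the actual tag set (NER, POS, ...) is not documented here.
inputs = tokenizer("Hugging Face is based in New York City.", return_tensors="tf")
logits = model(**inputs).logits                      # shape: (1, seq_len, num_labels)
predicted_ids = tf.argmax(logits, axis=-1)[0].numpy()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy())
labels = [model.config.id2label[int(i)] for i in predicted_ids]
print(list(zip(tokens, labels)))
```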
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1017, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
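The optimizer dictionary above (AdamWeightDecay wrapping a linear PolynomialDecay schedule over 1017 steps with a weight decay rate of 0.01) is consistent with the output of `transformers.create_optimizer`; a hedged sketch of a call that would produce an equivalent configuration:
```python
from transformers import create_optimizer

# Rebuilds the AdamWeightDecay + PolynomialDecay configuration listed above.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=1017,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```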
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1059 | 0.0572 | 0 |
| 0.0391 | 0.0542 | 1 |
| 0.0240 | 0.0593 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
88334cc29205e927fe1bf15ee6611c0e
|