Dataset columns:

| column | dtype | range / classes |
|---|---|---|
| repo_id | string | lengths 4 to 110 |
| author | string | lengths 2 to 27, nullable |
| model_type | string | lengths 2 to 29, nullable |
| files_per_repo | int64 | 2 to 15.4k |
| downloads_30d | int64 | 0 to 19.9M |
| library | string | lengths 2 to 37, nullable |
| likes | int64 | 0 to 4.34k |
| pipeline | string | lengths 5 to 30, nullable |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2 to 30 |
| languages | string | lengths 4 to 1.63k, nullable |
| datasets | string | lengths 2 to 2.58k, nullable |
| co2 | string | 29 classes |
| prs_count | int64 | 0 to 125 |
| prs_open | int64 | 0 to 120 |
| prs_merged | int64 | 0 to 15 |
| prs_closed | int64 | 0 to 28 |
| discussions_count | int64 | 0 to 218 |
| discussions_open | int64 | 0 to 148 |
| discussions_closed | int64 | 0 to 70 |
| tags | string | lengths 2 to 513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401 to 598k |
| is_nc | bool | 1 class |
| readme | string | lengths 0 to 598k |
| hash | string | length 32 |
sd-concepts-library/malika-favre-art-style
|
sd-concepts-library
| null | 18 | 0 | null | 21 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 2,202 | false |
### Malika Favre Art Style on Stable Diffusion
This is the `<malika-favre>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
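Beyond the notebooks, the embedding can also be loaded directly with `diffusers`. This is a rough sketch rather than part of the original card: it assumes a `diffusers` release that provides `load_textual_inversion`, and uses `runwayml/stable-diffusion-v1-5` as a placeholder base checkpoint.
```python
# Hedged sketch (not from the original card): load the <malika-favre> embedding
# into a Stable Diffusion v1 pipeline. Assumes diffusers ships load_textual_inversion
# and that a CUDA device is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder; any compatible SD v1 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Registers the <malika-favre> token from the learned embedding in this repo.
pipe.load_textual_inversion("sd-concepts-library/malika-favre-art-style")

image = pipe("a portrait of a woman in the style of <malika-favre>").images[0]
image.save("malika-favre-style.png")
```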
Here is the new concept you will be able to use as a `style`:













|
953f0e105f1902d907835cc5e71593eb
|
carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h
|
carlosdanielhernandezmena
|
wav2vec2
| 9 | 62 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
cc-by-4.0
|
['fo']
|
['carlosdanielhernandezmena/ravnursson_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'faroese', 'xlrs-53-faroese', 'ravnur-project', 'faroe-islands']
| true | true | true | 3,987 | false |
# wav2vec2-large-xlsr-53-faroese-100h
The "wav2vec2-large-xlsr-53-faroese-100h" is an acoustic model suitable for Automatic Speech Recognition in Faroese. It is the result of fine-tuning the model "facebook/wav2vec2-large-xlsr-53" with 100 hours of Faroese data released by the Ravnur Project (https://maltokni.fo/en/) from the Faroe Islands.
The specific dataset used to create the model is called "Ravnursson Faroese Speech and Transcripts" and it is available at http://hdl.handle.net/20.500.12537/276.
The fine-tuning process was performed during November 2022 on the servers of the Language and Voice Lab (https://lvl.ru.is/) at Reykjavík University (Iceland) by Carlos Daniel Hernández Mena.
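Before the full evaluation script below, here is a minimal transcription sketch; it is not part of the original card, and `audio.wav` is a placeholder for any 16 kHz mono recording.
```python
# Hedged quick-start sketch (not in the original card); audio.wav is a placeholder file.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_NAME = "carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)

speech, _ = librosa.load("audio.wav", sr=16_000)  # resample to the expected 16 kHz
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])  # greedy CTC decoding
```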
# Evaluation
```python
import torch
from transformers import Wav2Vec2Processor
from transformers import Wav2Vec2ForCTC
#Load the processor and model.
MODEL_NAME="carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h"
processor = Wav2Vec2Processor.from_pretrained(MODEL_NAME)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_NAME)
#Load the dataset
from datasets import load_dataset, load_metric, Audio
ds=load_dataset("carlosdanielhernandezmena/ravnursson_asr",split='test')
#Downsample to 16kHz
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
#Process the dataset
def prepare_dataset(batch):
    audio = batch["audio"]
    # Batched output is "un-batched" to ensure mapping is correct
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    with processor.as_target_processor():
        batch["labels"] = processor(batch["normalized_text"]).input_ids
    return batch
ds = ds.map(prepare_dataset, remove_columns=ds.column_names, num_proc=1)
#Define the evaluation metric
import numpy as np
wer_metric = load_metric("wer")
def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    # We do not want to group tokens when computing the metrics
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    wer = wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
#Do the evaluation (with batch_size=1)
model = model.to(torch.device("cuda"))
def map_to_result(batch):
    with torch.no_grad():
        input_values = torch.tensor(batch["input_values"], device="cuda").unsqueeze(0)
        logits = model(input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_str"] = processor.batch_decode(pred_ids)[0]
    batch["sentence"] = processor.decode(batch["labels"], group_tokens=False)
    return batch
results = ds.map(map_to_result,remove_columns=ds.column_names)
#Compute the overall WER now.
print("Test WER: {:.3f}".format(wer_metric.compute(predictions=results["pred_str"], references=results["sentence"])))
```
**Test Result**: 0.076
# BibTeX entry and citation info
*When publishing results based on these models please refer to:*
```bibtex
@misc{mena2022xlrs53faroese,
title={Acoustic Model in Faroese: wav2vec2-large-xlsr-53-faroese-100h.},
author={Hernandez Mena, Carlos Daniel},
year={2022},
url={https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-faroese-100h},
}
```
# Acknowledgements
We want to thank Jón Guðnason, head of the Language and Voice Lab, for providing the computational power that made this model possible. We also want to thank the "Language Technology Programme for Icelandic 2019-2023", which is managed and coordinated by Almannarómur and funded by the Icelandic Ministry of Education, Science and Culture.
Special thanks to Annika Simonsen and to The Ravnur Project for making their
"Basic Language Resource Kit" (BLARK 1.0) publicly available through the research paper "Creating a Basic Language Resource Kit for Faroese" https://aclanthology.org/2022.lrec-1.495.pdf
|
6fa580976fd1b619ff1b31ea22199c22
|
RichardsonTXCarpetCleaning/UpholsteryCleaningRichardsonTX
|
RichardsonTXCarpetCleaning
| null | 2 | 0 | null | 0 | null | false | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 500 | false |
Upholstery Cleaning Richardson TX
https://carpetcleaning-richardson.com/upholstery-cleaning.html
(972) 454-9815
Your furniture is among the most expensive items in your home, along with your jewelry, electronics, cars, and other possessions. It's possible that some of this furniture was passed down through generations. You want to take care of it so that future generations can continue to enjoy it. Call Richardson TX Carpet Cleaning right away if you require steam cleaning for your upholstery!
|
52a55178c30e4828b59879f807be4fcd
|
model-attribution-challenge/gpt2-xl
|
model-attribution-challenge
|
gpt2
| 10 | 111 |
transformers
| 0 |
text-generation
| true | true | true |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 11,933 | false |
# GPT-2 XL
## Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** GPT-2 XL is the **1.5B parameter** version of GPT-2, a transformer-based language model created and released by OpenAI. The model is pretrained on English-language text using a causal language modeling (CLM) objective.
- **Developed by:** OpenAI, see [associated research paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and [GitHub repo](https://github.com/openai/gpt-2) for model developers.
- **Model Type:** Transformer-based language model
- **Language(s):** English
- **License:** [Modified MIT License](https://github.com/openai/gpt-2/blob/master/LICENSE)
- **Related Models:** [GPT-2](https://huggingface.co/gpt2), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-Large](https://huggingface.co/gpt2-large)
- **Resources for more information:**
- [Research Paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
- [OpenAI Blog Post](https://openai.com/blog/better-language-models/)
- [GitHub Repo](https://github.com/openai/gpt-2)
- [OpenAI Model Card for GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md)
- [OpenAI GPT-2 1.5B Release Blog Post](https://openai.com/blog/gpt-2-1-5b-release/)
- Test the full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
## How to Get Started with the Model
Use the code below to get started with the model. You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = TFGPT2Model.from_pretrained('gpt2-xl')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Uses
#### Direct Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> The primary intended users of these models are AI researchers and practitioners.
>
> We primarily imagine these language models will be used by researchers to better understand the behaviors, capabilities, biases, and constraints of large-scale generative language models.
#### Downstream Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Here are some secondary use cases we believe are likely:
>
> - Writing assistance: Grammar assistance, autocompletion (for normal prose or code)
> - Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.
> - Entertainment: Creation of games, chat bots, and amusing generations.
#### Misuse and Out-of-scope Use
In their [model card about GPT-2](https://github.com/openai/gpt-2/blob/master/model_card.md), OpenAI wrote:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
#### Biases
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For example:
```python
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2-xl')
set_seed(42)
generator("The man worked as a", max_length=10, num_return_sequences=5)
set_seed(42)
generator("The woman worked as a", max_length=10, num_return_sequences=5)
```
This bias will also affect all fine-tuned versions of this model. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
#### Risks and Limitations
When they released the 1.5B parameter model, OpenAI wrote in a [blog post](https://openai.com/blog/gpt-2-1-5b-release/):
> GPT-2 can be fine-tuned for misuse. Our partners at the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups can use GPT-2 for misuse, specifically by fine-tuning GPT-2 models on four ideological positions: white supremacy, Marxism, jihadist Islamism, and anarchism. CTEC demonstrated that it’s possible to create models that can generate synthetic propaganda for these ideologies. They also show that, despite having low detection accuracy on synthetic outputs, ML-based detection methods can give experts reasonable suspicion that an actor is generating synthetic text.
The blog post further discusses the risks, limitations, and biases of the model.
## Training
#### Training Data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
#### Training Procedure
The model is pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks.
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
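As an illustration of this objective (our sketch, not part of the original card), the Hugging Face implementation shifts the labels internally when `labels` are passed, so the average next-token loss on a sentence can be computed as follows:
```python
# Sketch of the causal language modeling objective described above: the model
# predicts token i+1 from tokens 1..i; passing labels=input_ids lets the library
# perform the one-token shift internally.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2-xl')
model = GPT2LMHeadModel.from_pretrained('gpt2-xl')  # large download; 'gpt2' also works for a quick test

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors='pt')
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])

print(out.loss)             # average next-token cross-entropy
print(torch.exp(out.loss))  # perplexity of this sentence under the model
```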
## Evaluation
The following evaluation information is extracted from the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
#### Testing Data, Factors and Metrics
The model authors write in the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) that:
> Since our model operates on a byte level and does not require lossy pre-processing or tokenization, we can evaluate it on any language model benchmark. Results on language modeling datasets are commonly reported in a quantity which is a scaled or exponentiated version of the average negative log probability per canonical prediction unit - usually a character, a byte, or a word. We evaluate the same quantity by computing the log-probability of a dataset according to a WebText LM and dividing by the number of canonical units. For many of these datasets, WebText LMs would be tested significantly out-of-distribution, having to predict aggressively standardized text, tokenization artifacts such as disconnected punctuation and contractions, shuffled sentences, and even the string <UNK> which is extremely rare in WebText - occurring only 26 times in 40 billion bytes. We report our main results...using invertible de-tokenizers which remove as many of these tokenization / pre-processing artifacts as possible. Since these de-tokenizers are invertible, we can still calculate the log probability of a dataset and they can be thought of as a simple form of domain adaptation.
#### Results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 8.63 | 63.24 | 93.30 | 89.05 | 18.34 | 35.76 | 0.93 | 0.98 | 17.48 | 42.16 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware type and hours used are based on information provided by one of the model authors on [Reddit](https://bit.ly/2Tw1x4L).
- **Hardware Type:** 32 TPUv3 chips
- **Hours used:** 168
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) for details on the modeling architecture, objective, and training details.
## Citation Information
```bibtex
@article{radford2019language,
title={Language models are unsupervised multitask learners},
author={Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya and others},
journal={OpenAI blog},
volume={1},
number={8},
pages={9},
year={2019}
}
```
## Model Card Authors
This model card was written by the Hugging Face team.
|
b3c9166c2435a6786bf1f6cdf49559a5
|
sherry7144/wav2vec2-base-timit-demo-colab2
|
sherry7144
|
wav2vec2
| 12 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,341 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7746
- Wer: 0.5855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 800
- num_epochs: 35
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1452 | 13.89 | 500 | 2.9679 | 1.0 |
| 1.075 | 27.78 | 1000 | 0.7746 | 0.5855 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
041a1dc2a5fd2df2e983ef1d4df178bd
|
amritpattnaik/mt5-small-amrit-finetuned-amazon-en
|
amritpattnaik
|
mt5
| 13 | 3 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['amazon_reviews_multi']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,015 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-amrit-finetuned-amazon-en
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3112
- Rouge1: 15.4603
- Rouge2: 7.1882
- Rougel: 15.2221
- Rougelsum: 15.1231
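The card does not include an inference example; a minimal summarization sketch (ours, with a made-up English review as input) could look like this:
```python
# Hedged usage sketch (not part of the original card); the review text is made up.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="amritpattnaik/mt5-small-amrit-finetuned-amazon-en",
)
review = (
    "I bought this kettle two months ago. It boils water quickly and the handle "
    "stays cool, but the lid is a little loose and rattles while pouring."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```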
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 8.7422 | 1.0 | 771 | 3.6517 | 12.9002 | 4.8601 | 12.6743 | 12.6561 |
| 4.1322 | 2.0 | 1542 | 3.4937 | 14.1146 | 6.5433 | 14.0882 | 14.0484 |
| 3.7426 | 3.0 | 2313 | 3.4070 | 14.4797 | 6.8527 | 14.1544 | 14.2753 |
| 3.5743 | 4.0 | 3084 | 3.3439 | 15.9805 | 7.8873 | 15.4935 | 15.41 |
| 3.4489 | 5.0 | 3855 | 3.3122 | 16.5749 | 7.9809 | 16.1922 | 16.1226 |
| 3.3602 | 6.0 | 4626 | 3.3187 | 16.4809 | 7.7656 | 16.211 | 16.1185 |
| 3.3215 | 7.0 | 5397 | 3.3180 | 15.4615 | 7.1361 | 15.1919 | 15.1144 |
| 3.294 | 8.0 | 6168 | 3.3112 | 15.4603 | 7.1882 | 15.2221 | 15.1231 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cde384dc9c6cec83c9627d4a4dab1ef6
|
mrm8488/t5-small-finetuned-turk-text-simplification
|
mrm8488
|
t5
| 11 | 1 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['generated_from_trainer']
| true | true | true | 1,843 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# T5 (small) finetuned-turk-text-simplification
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1001
- Rouge2 Precision: 0.6825
- Rouge2 Recall: 0.4542
- Rouge2 Fmeasure: 0.5221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.4318 | 1.0 | 500 | 0.1053 | 0.682 | 0.4533 | 0.5214 |
| 0.0977 | 2.0 | 1000 | 0.1019 | 0.683 | 0.4545 | 0.5225 |
| 0.0938 | 3.0 | 1500 | 0.1010 | 0.6828 | 0.4547 | 0.5226 |
| 0.0916 | 4.0 | 2000 | 0.1003 | 0.6829 | 0.4545 | 0.5225 |
| 0.0906 | 5.0 | 2500 | 0.1001 | 0.6825 | 0.4542 | 0.5221 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
5023106808f3ac29e4d8b4becbcc0640
|
theojolliffe/bart-cnn-pubmed-arxiv-pubmed
|
theojolliffe
|
bart
| 13 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null |
['scientific_papers']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,493 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-pubmed-arxiv-pubmed
This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9245
- Rouge1: 37.3328
- Rouge2: 15.5894
- Rougel: 23.0297
- Rougelsum: 33.952
- Gen Len: 136.3568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 2.0272 | 1.0 | 29981 | 1.9245 | 37.3328 | 15.5894 | 23.0297 | 33.952 | 136.3568 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
2b897c915b3d1e19d8e173def823c8f1
|
classla/bcms-bertic-frenk-hate
|
classla
|
bert
| 10 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-sa-4.0
|
['hr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'hate-speech']
| false | true | true | 3,666 | false |
# bcms-bertic-frenk-hate
Text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the [FRENK dataset](https://www.clarin.si/repository/xmlui/handle/11356/1433), which comprises LGBT and migrant hate speech. Only the Croatian subset of the data was used for fine-tuning, and the dataset was relabeled for binary classification (offensive or acceptable).
## Fine-tuning hyperparameters
Fine-tuning was performed with `simpletransformers`. Beforehand, a brief hyperparameter optimisation was performed, and the presumed optimal hyperparameters are:
```python
model_args = {
    "num_train_epochs": 12,
    "learning_rate": 1e-5,
    "train_batch_size": 74,
}
```
## Performance
The same pipeline was run with two other transformer models and `fasttext` for comparison. Accuracy and macro F1 score were recorded for each of the 6 fine-tuning sessions and analyzed afterwards.
| model | average accuracy | average macro F1 |
|----------------------------|------------------|------------------|
| bcms-bertic-frenk-hate | 0.8313 | 0.8219 |
| EMBEDDIA/crosloengual-bert | 0.8054 | 0.796 |
| xlm-roberta-base | 0.7175 | 0.7049 |
| fasttext | 0.771 | 0.754 |
From the recorded accuracies and macro F1 scores, p-values were also calculated:
Comparison with `crosloengual-bert`:
| test | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney   | 0.00108          | 0.00108          |
| Student t-test | 2.43e-10 | 1.27e-10 |
Comparison with `xlm-roberta-base`:
| test | accuracy p-value | macro F1 p-value |
|----------------|------------------|------------------|
| Wilcoxon | 0.00781 | 0.00781 |
| Mann-Whitney   | 0.00107          | 0.00108          |
| Student t-test | 4.83e-11 | 5.61e-11 |
## Use examples
```python
from simpletransformers.classification import ClassificationModel
model = ClassificationModel(
    "bert", "5roop/bcms-bertic-frenk-hate", use_cuda=True,
)
predictions, logit_output = model.predict([
    'Ne odbacujem da će RH primiti još migranata iz Afganistana, no neće biti novog vala',
    "Potpredsjednik Vlade i ministar branitelja Tomo Medved komentirao je Vladine planove za zakonsku zabranu pozdrava 'za dom spremni' ",
])
predictions
### Output:
### array([0, 0])
```
## Citation
If you use the model, please cite the following paper on which the original model is based:
```
@inproceedings{ljubesic-lauc-2021-bertic,
title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian",
author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5",
pages = "37--42",
}
```
and the dataset used for fine-tuning:
```
@misc{ljubešić2019frenk,
title={The FRENK Datasets of Socially Unacceptable Discourse in Slovene and English},
author={Nikola Ljubešić and Darja Fišer and Tomaž Erjavec},
year={2019},
eprint={1906.02045},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/1906.02045}
}
```
|
14f05c0bc2a1d7af0f6c47f1e4e52ea5
|
lsanochkin/rubert-finetuned-collection3
|
lsanochkin
|
bert
| 16 | 3 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['collection3']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,558 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-finetuned-collection3
This model is a fine-tuned version of [sberbank-ai/ruBert-base](https://huggingface.co/sberbank-ai/ruBert-base) on the collection3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0514
- Precision: 0.9355
- Recall: 0.9577
- F1: 0.9465
- Accuracy: 0.9865
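The card does not show a usage snippet; a minimal token-classification sketch (ours, with a placeholder Russian sentence) could look like this:
```python
# Hedged usage sketch (not part of the original card); the example sentence is a placeholder.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="lsanochkin/rubert-finetuned-collection3",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
# "Ivan Petrov works at Yandex in Moscow."
print(ner("Иван Петров работает в Яндексе в Москве."))
```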
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0794 | 1.0 | 1163 | 0.0536 | 0.9178 | 0.9466 | 0.9320 | 0.9825 |
| 0.0391 | 2.0 | 2326 | 0.0512 | 0.9228 | 0.9553 | 0.9388 | 0.9853 |
| 0.0191 | 3.0 | 3489 | 0.0514 | 0.9355 | 0.9577 | 0.9465 | 0.9865 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0.dev20220929+cu117
- Datasets 2.8.0
- Tokenizers 0.13.2
|
957d13511a29a4b828d088fc5558ae33
|
Shularp/mt5-small-finetuned-ar-to-th-3rd-round
|
Shularp
|
mt5
| 12 | 16 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,416 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-ar-to-th-finetuned-ar-to-th-2nd-round-finetuned-ar-to-th-3rd-round
This model is a fine-tuned version of [Shularp/mt5-small-finetuned-ar-to-th](https://huggingface.co/Shularp/mt5-small-finetuned-ar-to-th) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7393
- Bleu: 5.3860
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5917 | 1.0 | 16806 | 2.8661 | 4.1430 |
| 3.432 | 2.0 | 33612 | 2.7698 | 5.0779 |
| 3.3793 | 3.0 | 50418 | 2.7393 | 5.3860 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
a5efd2befa3581491a03de13127e112d
|
fathyshalab/all-roberta-large-v1-small_talk-4-16-5-oos
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,519 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-small_talk-4-16-5-oos
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3566
- Accuracy: 0.3855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7259 | 1.0 | 1 | 2.5917 | 0.2551 |
| 2.217 | 2.0 | 2 | 2.5059 | 0.3275 |
| 1.7237 | 3.0 | 3 | 2.4355 | 0.3768 |
| 1.4001 | 4.0 | 4 | 2.3837 | 0.3739 |
| 1.1937 | 5.0 | 5 | 2.3566 | 0.3855 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
7283babeb9fb7e30e9ba6bc6da333a2b
|
cmarkea/distilcamembert-base-sentiment
|
cmarkea
|
camembert
| 8 | 2,638 |
transformers
| 15 |
text-classification
| true | true | false |
mit
|
['fr']
|
['amazon_reviews_multi', 'allocine']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 6,816 | false |
DistilCamemBERT-Sentiment
=========================
We present DistilCamemBERT-Sentiment, which is [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base) fine-tuned for sentiment analysis in French. The model is built from two datasets, [Amazon Reviews](https://huggingface.co/datasets/amazon_reviews_multi) and [Allociné.fr](https://huggingface.co/datasets/allocine), to minimize bias: Amazon reviews are short and similar to one another, whereas Allociné reviews are long, rich texts.
This model is close to [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), which is based on [CamemBERT](https://huggingface.co/camembert-base). The problem with CamemBERT-based models shows up at scale, for example in production, where inference cost can become a technological issue. To counteract this, we propose this model, which **divides the inference time by two** at the same power consumption, thanks to [DistilCamemBERT](https://huggingface.co/cmarkea/distilcamembert-base).
Dataset
-------
The dataset comprises 204,993 Amazon reviews for training and 4,999 for testing, plus 235,516 training and 4,729 test reviews from the [Allocine website](https://www.allocine.fr/). The dataset is labeled into five categories:
* 1 star: represents a terrible appreciation,
* 2 stars: bad appreciation,
* 3 stars: neutral appreciation,
* 4 stars: good appreciation,
* 5 stars: excellent appreciation.
Evaluation results
------------------
In addition to accuracy (called here *exact accuracy*), and in order to be robust to +/-1 star estimation errors, we take the following definition as a performance measure:
$$\mathrm{top\!-\!2\; acc}=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 2}\mathbb{1}(\hat{f}_{i,l}=y_i)$$
where \\(\hat{f}_l\\) is the l-th largest predicted label, \\(y\\) the true label, \\(\mathcal{O}\\) is the test set of the observations and \\(\mathbb{1}\\) is the indicator function.
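To make the metric concrete, here is a small NumPy sketch of the same definition (ours, not from the original card):
```python
# Illustrative sketch of the top-2 accuracy defined above: a prediction counts as
# correct if the true star rating is among the two highest-scoring classes.
import numpy as np

def top2_accuracy(scores: np.ndarray, y_true: np.ndarray) -> float:
    """scores: (n_samples, 5) class scores; y_true: (n_samples,) integer labels."""
    top2 = np.argsort(scores, axis=1)[:, -2:]          # indices of the 2 best classes
    return float((top2 == y_true[:, None]).any(axis=1).mean())

scores = np.array([[0.05, 0.14, 0.36, 0.32, 0.13],
                   [0.60, 0.20, 0.10, 0.05, 0.05]])
print(top2_accuracy(scores, np.array([3, 1])))         # -> 1.0, both truths are in the top 2
```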
| **class** | **exact accuracy (%)** | **top-2 acc (%)** | **support** |
| :---------: | :--------------------: | :---------------: | :---------: |
| **global** | 61.01 | 88.80 | 9,698 |
| **1 star** | 87.21 | 77.17 | 1,905 |
| **2 stars** | 79.19 | 84.75 | 1,935 |
| **3 stars** | 77.85 | 78.98 | 1,974 |
| **4 stars** | 78.61 | 90.22 | 1,952 |
| **5 stars** | 85.96 | 82.92 | 1,932 |
Benchmark
---------
This model is compared to 3 reference models (see below). As the models do not all share the same definition of targets, we detail the performance measure used for each comparison. An **AMD Ryzen 5 4500U @ 2.3GHz with 6 cores** was used for the mean inference time measure.
#### bert-base-multilingual-uncased-sentiment
[nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is based on the multilingual, uncased version of BERT. This sentiment analyzer is trained on Amazon reviews, similar to our model, so the targets and their definitions are the same.
| **model** | **time (ms)** | **exact accuracy (%)** | **top-2 acc (%)** |
| :-------: | :-----------: | :--------------------: | :---------------: |
| [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **61.01** | **88.80** |
| [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) | 187.70 | 54.41 | 82.82 |
#### tf-allociné and barthez-sentiment-classification
[tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine), based on [CamemBERT](https://huggingface.co/camembert-base), and [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification), based on [BARThez](https://huggingface.co/moussaKam/barthez), share the same binary target definition. To bring our model back to a two-class problem, we only consider the *"1 star"* and *"2 stars"* labels as *negative* sentiment and *"4 stars"* and *"5 stars"* as *positive* sentiment. We exclude *"3 stars"*, which can be interpreted as a *neutral* class. In this context, the problem of +/-1 star estimation errors disappears, so we use the classical accuracy definition.
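For reference, the star-to-binary mapping described above can be sketched as follows (ours, not from the original card):
```python
# Sketch of the binarization used for this comparison: 1-2 stars -> negative,
# 4-5 stars -> positive, 3 stars dropped as neutral.
def to_binary(label: str):
    if label in ("1 star", "2 stars"):
        return "negative"
    if label in ("4 stars", "5 stars"):
        return "positive"
    return None  # "3 stars" is excluded as neutral

print([to_binary(l) for l in ["1 star", "3 stars", "5 stars"]])
# ['negative', None, 'positive']
```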
| **model** | **time (ms)** | **exact accuracy (%)** |
| :-------: | :-----------: | :--------------------: |
| [cmarkea/distilcamembert-base-sentiment](https://huggingface.co/cmarkea/distilcamembert-base-sentiment) | **95.56** | **97.52** |
| [tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine) | 329.74 | 95.69 |
| [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification) | 197.95 | 94.29 |
How to use DistilCamemBERT-Sentiment
------------------------------------
```python
from transformers import pipeline
analyzer = pipeline(
    task='text-classification',
    model="cmarkea/distilcamembert-base-sentiment",
    tokenizer="cmarkea/distilcamembert-base-sentiment"
)
result = analyzer(
    "J'aime me promener en forêt même si ça me donne mal aux pieds.",
    return_all_scores=True
)
result
[{'label': '1 star', 'score': 0.047529436647892},
 {'label': '2 stars', 'score': 0.14150355756282806},
 {'label': '3 stars', 'score': 0.3586442470550537},
 {'label': '4 stars', 'score': 0.3181498646736145},
 {'label': '5 stars', 'score': 0.13417290151119232}]
```
### Optimum + ONNX
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
HUB_MODEL = "cmarkea/distilcamembert-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(HUB_MODEL)
model = ORTModelForSequenceClassification.from_pretrained(HUB_MODEL)
onnx_qa = pipeline("text-classification", model=model, tokenizer=tokenizer)
# Quantized onnx model
quantized_model = ORTModelForSequenceClassification.from_pretrained(
    HUB_MODEL, file_name="model_quantized.onnx"
)
```
Citation
--------
```bibtex
@inproceedings{delestre:hal-03674695,
TITLE = {{DistilCamemBERT : une distillation du mod{\`e}le fran{\c c}ais CamemBERT}},
AUTHOR = {Delestre, Cyrile and Amar, Abibatou},
URL = {https://hal.archives-ouvertes.fr/hal-03674695},
BOOKTITLE = {{CAp (Conf{\'e}rence sur l'Apprentissage automatique)}},
ADDRESS = {Vannes, France},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {NLP ; Transformers ; CamemBERT ; Distillation},
PDF = {https://hal.archives-ouvertes.fr/hal-03674695/file/cap2022.pdf},
HAL_ID = {hal-03674695},
HAL_VERSION = {v1},
}
```
|
97522b8a9ad78cb7d9490722ab58d77a
|
sd-concepts-library/reeducation-camp
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,330 | false |
### reeducation camp on Stable Diffusion
This is the `<reeducation-camp>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
dc50435e584f2f487cb7e01f6681e2e0
|
jonatasgrosman/exp_w2v2t_pl_no-pretraining_s20
|
jonatasgrosman
|
wav2vec2
| 10 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pl']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'pl']
| false | true | true | 413 | false |
# exp_w2v2t_pl_no-pretraining_s20
Randomly initialized wav2vec2 model (no self-supervised pretraining) fine-tuned for speech recognition using the train split of [Common Voice 7.0 (pl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
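The card does not show a code snippet; a minimal sketch with the HuggingSound tool mentioned above (file paths are placeholders) would look roughly like this:
```python
# Hedged sketch using HuggingSound (paths below are placeholders; 16 kHz audio expected).
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_pl_no-pretraining_s20")
audio_paths = ["/path/to/clip_1.mp3", "/path/to/clip_2.wav"]

transcriptions = model.transcribe(audio_paths)
for item in transcriptions:
    print(item["transcription"])
```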
|
059f1549c91185883bd7c102699ed0fe
|
glob-asr/wav2vec2-xls-r-300m-spanish-large-LM
|
glob-asr
|
wav2vec2
| 14 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'es', 'robust-speech-event']
| true | true | true | 3,329 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-spanish-large
This model is a fine-tuned version of [tomascufaro/xls-r-es-test](https://huggingface.co/tomascufaro/xls-r-es-test) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1431
- Wer: 0.1197
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 20
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.1769 | 0.15 | 400 | 0.1795 | 0.1698 |
| 0.217 | 0.3 | 800 | 0.2000 | 0.1945 |
| 0.2372 | 0.45 | 1200 | 0.1985 | 0.1859 |
| 0.2351 | 0.6 | 1600 | 0.1901 | 0.1772 |
| 0.2269 | 0.75 | 2000 | 0.1968 | 0.1783 |
| 0.2284 | 0.9 | 2400 | 0.1873 | 0.1771 |
| 0.2014 | 1.06 | 2800 | 0.1840 | 0.1696 |
| 0.1988 | 1.21 | 3200 | 0.1904 | 0.1730 |
| 0.1919 | 1.36 | 3600 | 0.1827 | 0.1630 |
| 0.1919 | 1.51 | 4000 | 0.1788 | 0.1629 |
| 0.1817 | 1.66 | 4400 | 0.1755 | 0.1558 |
| 0.1812 | 1.81 | 4800 | 0.1795 | 0.1638 |
| 0.1808 | 1.96 | 5200 | 0.1762 | 0.1603 |
| 0.1625 | 2.11 | 5600 | 0.1721 | 0.1557 |
| 0.1477 | 2.26 | 6000 | 0.1735 | 0.1504 |
| 0.1508 | 2.41 | 6400 | 0.1708 | 0.1478 |
| 0.157 | 2.56 | 6800 | 0.1644 | 0.1466 |
| 0.1491 | 2.71 | 7200 | 0.1638 | 0.1445 |
| 0.1458 | 2.86 | 7600 | 0.1582 | 0.1426 |
| 0.1387 | 3.02 | 8000 | 0.1607 | 0.1376 |
| 0.1269 | 3.17 | 8400 | 0.1559 | 0.1364 |
| 0.1172 | 3.32 | 8800 | 0.1521 | 0.1335 |
| 0.1203 | 3.47 | 9200 | 0.1534 | 0.1330 |
| 0.1177 | 3.62 | 9600 | 0.1485 | 0.1304 |
| 0.1167 | 3.77 | 10000 | 0.1498 | 0.1302 |
| 0.1194 | 3.92 | 10400 | 0.1463 | 0.1287 |
| 0.1053 | 4.07 | 10800 | 0.1483 | 0.1282 |
| 0.098 | 4.22 | 11200 | 0.1498 | 0.1267 |
| 0.0958 | 4.37 | 11600 | 0.1461 | 0.1233 |
| 0.0946 | 4.52 | 12000 | 0.1444 | 0.1218 |
| 0.094 | 4.67 | 12400 | 0.1434 | 0.1206 |
| 0.0932 | 4.82 | 12800 | 0.1424 | 0.1206 |
| 0.0912 | 4.98 | 13200 | 0.1431 | 0.1197 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
372d17eabe2b466794ac0c17ed476ec8
|
jayanta/distilbert-base-uncased-sentiment-finetuned-memes
|
jayanta
|
distilbert
| 13 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,030 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-sentiment-finetuned-memes
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1824
- Accuracy: 0.8270
- Precision: 0.8270
- Recall: 0.8270
- F1: 0.8270
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.5224 | 1.0 | 4293 | 0.5321 | 0.7720 | 0.8084 | 0.7720 | 0.7721 |
| 0.4386 | 2.0 | 8586 | 0.4930 | 0.7961 | 0.7980 | 0.7961 | 0.7967 |
| 0.3722 | 3.0 | 12879 | 0.7652 | 0.7925 | 0.7955 | 0.7925 | 0.7932 |
| 0.3248 | 4.0 | 17172 | 0.9827 | 0.8045 | 0.8047 | 0.8045 | 0.8023 |
| 0.308 | 5.0 | 21465 | 0.9518 | 0.8244 | 0.8260 | 0.8244 | 0.8249 |
| 0.2906 | 6.0 | 25758 | 1.0971 | 0.8155 | 0.8166 | 0.8155 | 0.8159 |
| 0.2036 | 7.0 | 30051 | 1.1457 | 0.8260 | 0.8271 | 0.8260 | 0.8264 |
| 0.1747 | 8.0 | 34344 | 1.1824 | 0.8270 | 0.8270 | 0.8270 | 0.8270 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
72a0b3f02ce675776ed1e069bc586a6f
|
Krishadow/biobert-finetuned-ner-K
|
Krishadow
|
bert
| 8 | 9 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Krishadow/biobert-finetuned-ner-K
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0099
- Validation Loss: 0.0676
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1695, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1222 | 0.0604 | 0 |
| 0.0398 | 0.0531 | 1 |
| 0.0220 | 0.0616 | 2 |
| 0.0134 | 0.0653 | 3 |
| 0.0099 | 0.0676 | 4 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
c0cac851ff8881f121e3fb6b9b8e3fff
|
AaronMarker/my-awesome-model
|
AaronMarker
|
distilbert
| 4 | 1 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,304 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my-awesome-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8153
- Validation Loss: 0.4165
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8153 | 0.4165 | 0 |
### Framework versions
- Transformers 4.21.2
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
6688ba9bc3b8996185995039182f37d4
|
TheLastBen/rick-roll-style
|
TheLastBen
| null | 38 | 51 |
diffusers
| 11 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 2,906 | false |
### Rick Roll Style V2.1
#### V2.1-768 Model by TheLastBen
This model was trained on 130 images, with 1200 UNet steps and 400 text_encoder steps.
#### Prompt example :
(anthropomorphic) chicken rckrll, closeup
Negative : painting, fake, drawing
768x768
A difference of 5 steps can make a big difference in the output with the same seed.
You can use the images below to load the full generation settings in A1111.
A1111 Colab :[fast-stable-diffusion-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
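For use outside A1111, a rough `diffusers` sketch (ours, not from the card; it assumes the repo is a full Stable Diffusion 2.1-768 pipeline, as the description suggests) could look like this:
```python
# Hedged sketch (not from the original card): load the checkpoint with diffusers and
# reuse the prompt example above. Assumes a full SD 2.1-768 pipeline and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "TheLastBen/rick-roll-style", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "(anthropomorphic) chicken rckrll, closeup",
    negative_prompt="painting, fake, drawing",
    width=768,
    height=768,
).images[0]
image.save("rckrll-chicken.png")
```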
#### Sample pictures of this concept:
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
|
ad7e41a77227327d7c08d4377a2f307d
|
ahmad573/wav2vec2-base-timit-demo-colab2
|
ahmad573
|
wav2vec2
| 12 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,438 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1914
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 3.8196 | 7.04 | 500 | 3.2201 | 1.0 |
| 3.1517 | 14.08 | 1000 | 3.1876 | 1.0 |
| 3.1493 | 21.13 | 1500 | 3.1837 | 1.0 |
| 3.1438 | 28.17 | 2000 | 3.1914 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
41d4fbaca3f624785d223b0126e007e1
|
sayakpaul/glpn-nyu-finetuned-diode-221221-102136
|
sayakpaul
|
glpn
| 7 | 1 |
transformers
| 0 |
depth-estimation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'depth-estimation', 'generated_from_trainer']
| true | true | true | 2,756 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glpn-nyu-finetuned-diode-221221-102136
This model is a fine-tuned version of [vinvino02/glpn-nyu](https://huggingface.co/vinvino02/glpn-nyu) on the diode-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4222
- Mae: 0.4110
- Rmse: 0.6292
- Abs Rel: 0.3778
- Log Mae: 0.1636
- Log Rmse: 0.2240
- Delta1: 0.4320
- Delta2: 0.6806
- Delta3: 0.8068
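The card does not include a usage example; a minimal depth-estimation sketch (ours; it assumes a transformers version that ships the `depth-estimation` pipeline, and `indoor_scene.jpg` is a placeholder image) could look like this:
```python
# Hedged usage sketch (not part of the original card); the input image is a placeholder.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline(
    "depth-estimation",
    model="sayakpaul/glpn-nyu-finetuned-diode-221221-102136",
)
image = Image.open("indoor_scene.jpg")
result = depth_estimator(image)
result["depth"].save("indoor_scene_depth.png")  # PIL image of the predicted depth map
```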
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 24
- eval_batch_size: 48
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.15
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae | Rmse | Abs Rel | Log Mae | Log Rmse | Delta1 | Delta2 | Delta3 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:-------:|:--------:|:------:|:------:|:------:|
| 0.4953 | 1.0 | 72 | 0.4281 | 0.4216 | 0.6448 | 0.3539 | 0.1696 | 0.2312 | 0.4427 | 0.6625 | 0.7765 |
| 0.3855 | 2.0 | 144 | 0.4749 | 0.4444 | 0.6498 | 0.4156 | 0.1846 | 0.2408 | 0.3612 | 0.6027 | 0.7728 |
| 0.4158 | 3.0 | 216 | 0.5042 | 0.5122 | 0.7196 | 0.4385 | 0.2264 | 0.2834 | 0.2797 | 0.4837 | 0.6699 |
| 0.388 | 4.0 | 288 | 0.4418 | 0.4304 | 0.6473 | 0.4030 | 0.1745 | 0.2378 | 0.4027 | 0.6497 | 0.7900 |
| 0.4595 | 5.0 | 360 | 0.4394 | 0.4154 | 0.6292 | 0.4012 | 0.1664 | 0.2285 | 0.4262 | 0.6613 | 0.8021 |
| 0.393 | 6.0 | 432 | 0.4252 | 0.4060 | 0.6153 | 0.3944 | 0.1617 | 0.2215 | 0.4318 | 0.6747 | 0.8128 |
| 0.3468 | 7.0 | 504 | 0.4413 | 0.4366 | 0.6479 | 0.3835 | 0.1818 | 0.2385 | 0.3778 | 0.6248 | 0.7770 |
| 0.316 | 8.0 | 576 | 0.4218 | 0.4048 | 0.6192 | 0.3844 | 0.1606 | 0.2215 | 0.4374 | 0.6896 | 0.8119 |
| 0.3123 | 9.0 | 648 | 0.4263 | 0.4168 | 0.6295 | 0.3765 | 0.1689 | 0.2267 | 0.4139 | 0.6612 | 0.7976 |
| 0.2973 | 10.0 | 720 | 0.4222 | 0.4110 | 0.6292 | 0.3778 | 0.1636 | 0.2240 | 0.4320 | 0.6806 | 0.8068 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
ab8834659ac9df8bece2a2aee441edeb
|
thaonguyen274/resnet-50-finetuned-eurosat
|
thaonguyen274
|
resnet
| 18 | 4 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,871 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-eurosat
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9095
- Accuracy: 0.8240
## Model description
More information needed
## Intended uses & limitations
More information needed
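A minimal usage sketch with the `transformers` image-classification pipeline (assuming the repository ships its image processor config; the image path is a placeholder):
```python
from transformers import pipeline

# Image-classification pipeline built from this fine-tuned ResNet-50 checkpoint.
classifier = pipeline(
    "image-classification",
    model="thaonguyen274/resnet-50-finetuned-eurosat",
)

# "satellite_patch.jpg" is a placeholder; pass a path, URL, or PIL.Image.
for prediction in classifier("satellite_patch.jpg"):
    print(prediction["label"], round(prediction["score"], 3))
```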
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.78 | 0.96 | 17 | 1.7432 | 0.4321 |
| 1.7105 | 1.96 | 34 | 1.6596 | 0.6307 |
| 1.6045 | 2.96 | 51 | 1.5369 | 0.6758 |
| 1.6526 | 3.96 | 68 | 1.4111 | 0.7139 |
| 1.4018 | 4.96 | 85 | 1.2686 | 0.7602 |
| 1.2812 | 5.96 | 102 | 1.1433 | 0.7714 |
| 1.3282 | 6.96 | 119 | 1.0643 | 0.7910 |
| 1.1246 | 7.96 | 136 | 0.9794 | 0.8133 |
| 1.0731 | 8.96 | 153 | 0.9279 | 0.8087 |
| 1.0531 | 9.96 | 170 | 0.9095 | 0.8240 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
1f6467ab1b7be0be4c76c2bbec031142
|
sd-concepts-library/nathan-wyatt
|
sd-concepts-library
| null | 12 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,386 | false |
### Nathan-Wyatt on Stable Diffusion
This is the `<Nathan-Wyatt>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
1470469ed599bdfa0e0f96d05e48f8f3
|
baruga/hideous-blobfish
|
baruga
| null | 17 | 4 |
diffusers
| 1 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
| false | true | true | 1,396 | false |
## Description
This is a Stable Diffusion model fine-tuned on the infamous blobfish (often described as the ugliest animal in the world) for the DreamBooth Hackathon 🔥 animal theme. To participate or learn more, visit [this page](https://huggingface.co/dreambooth-hackathon).
To generate blobfish images, use **a photo of blofi fish in [your choice]** or experiment with other variations. For some reason, a CFG scale of 5 seems to give the best results; at 7, images start to get "overcooked". Despite multiple training runs with various settings, I couldn't fully solve this problem. Additional modifiers and negative prompts may also improve results.
## Examples
*a photo of blofi fish wearing a beautiful flower crown.*

*a photo of blofi fish in nerdy glasses.*

*a photo of blofi fish at the Arctic in a fluffy hat.*

*top rated surrealist painting of blofi fish by Salvador Dalí, intricate details.*

*top rated colorful origami photo of blofi fish.*

## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('baruga/hideous-blobfish')
# A prompt is required; reuse one of the example prompts above (CFG scale 5 per the note above).
image = pipeline("a photo of blofi fish wearing a beautiful flower crown", guidance_scale=5).images[0]
image
```
|
cde52202a5c05730729097b78fa5387b
|
espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3
|
espnet
| null | 23 | 22 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['chime6']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 17,649 | false |
## ESPnet2 ASR model
### `espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3`
This model was trained by simpleoier using chime6 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b757b89d45d5574cebf44e225cbe32e3e9e4f522
pip install -e .
cd egs2/chime6/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3
```
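Besides the recipe above, inference can also be run directly from Python. This is a minimal sketch, assuming `espnet`, `espnet_model_zoo`, `s3prl` (for the WavLM frontend), and `soundfile` are installed; `sample.wav` is a placeholder 16 kHz recording:
```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Download the pretrained model from the Hugging Face Hub and build the decoder.
speech2text = Speech2Text.from_pretrained(
    "espnet/simpleoier_chime6_asr_transformer_wavlm_lr1e-3"
)

# Load a 16 kHz mono waveform (placeholder file name).
speech, rate = sf.read("sample.wav")

# Decode; each n-best hypothesis is (text, tokens, token_ids, hypothesis).
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```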
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue May 3 16:47:10 EDT 2022`
- python version: `3.9.12 (main, Apr 5 2022, 06:56:58) [GCC 7.5.0]`
- espnet version: `espnet 202204`
- pytorch version: `pytorch 1.10.1`
- Git hash: `b757b89d45d5574cebf44e225cbe32e3e9e4f522`
- Commit date: `Mon May 2 09:21:08 2022 -0400`
## asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|58881|66.5|21.3|12.2|8.8|42.3|77.4|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|58881|68.6|20.7|10.6|8.4|39.8|77.5|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|58881|67.5|20.3|12.2|8.0|40.5|76.5|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|58881|67.7|21.4|10.9|8.6|40.9|77.9|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|58881|66.6|20.9|12.5|8.2|41.6|77.8|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|69.4|20.2|10.4|8.6|39.1|75.8|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|58881|65.7|20.2|14.1|7.5|41.8|77.8|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|58881|65.7|19.0|15.3|6.2|40.6|78.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|280767|78.1|7.7|14.1|9.1|31.0|77.9|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|280767|80.0|7.6|12.5|8.7|28.8|78.1|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|280767|78.6|7.3|14.1|8.1|29.5|77.5|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|280767|79.5|7.7|12.8|9.1|29.6|78.8|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|280767|77.9|7.6|14.5|8.3|30.3|78.6|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|80.6|7.4|12.0|8.9|28.3|76.6|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|280767|76.5|7.4|16.1|7.7|31.2|78.5|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|280767|77.0|7.6|15.4|7.2|30.2|79.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_asr_model_1epoch/dev_gss_multiarray|7437|92680|65.8|18.8|15.4|8.7|42.9|78.0|
|decode_asr_transformer_asr_model_2epoch/dev_gss_multiarray|7437|92680|67.9|18.1|13.9|8.2|40.3|78.2|
|decode_asr_transformer_asr_model_3epoch/dev_gss_multiarray|7437|92680|66.9|17.8|15.2|8.0|41.1|77.7|
|decode_asr_transformer_asr_model_5epoch/dev_gss_multiarray|7437|92680|67.2|18.5|14.3|8.2|40.9|78.9|
|decode_asr_transformer_asr_model_7epoch/dev_gss_multiarray|7437|92680|66.1|18.2|15.7|7.8|41.7|78.6|
|decode_asr_transformer_asr_model_valid.acc.ave/dev_gss_multiarray|0|0|0.0|0.0|0.0|0.0|0.0|0.0|
|decode_asr_transformer_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|68.9|17.7|13.4|8.2|39.3|76.6|
|decode_asr_transformer_lw0.5_lm_lm_train_lm_en_bpe1000_valid.loss.ave_asr_model_valid.acc.ave_5best/dev_gss_multiarray|7437|92680|66.1|19.1|14.8|10.2|44.1|78.6|
|decode_asr_transformer_lw0.5_ngram_ngram_3gram_asr_model_valid.acc.ave/dev_gss_multiarray|7437|92680|66.0|19.9|14.1|9.5|43.6|79.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_wavlm_lr1e-3_specaug_accum1_preenc128_warmup20k_raw_en_bpe1000_sp
ngpu: 0
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: null
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 8
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 48
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe1000_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_worn_simu_u400k_cleaned_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_worn_simu_u400k_cleaned_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_gss_multiarray/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev_gss_multiarray/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 20000
token_list:
- <blank>
- <unk>
- '[inaudible]'
- '[laughs]'
- '[noise]'
- ▁
- s
- ''''
- ▁i
- ▁it
- t
- ▁you
- ▁the
- ▁yeah
- ▁a
- ▁like
- ▁that
- ▁and
- ▁to
- m
- ▁oh
- ▁so
- '-'
- e
- re
- a
- ▁just
- ▁no
- d
- ▁we
- n
- ▁in
- ing
- i
- ▁of
- ▁do
- ▁is
- ▁have
- ▁what
- ▁was
- ▁this
- ▁can
- o
- ▁one
- r
- ▁but
- er
- y
- ▁they
- ed
- ▁uh
- ▁for
- ▁okay
- ▁there
- ▁be
- ▁he
- ▁don
- g
- ll
- ▁right
- p
- ▁not
- u
- ▁on
- c
- ▁then
- ▁know
- ▁my
- ▁or
- ▁get
- ▁are
- ▁all
- ▁um
- ▁me
- ▁if
- ▁go
- ▁good
- ▁with
- ▁really
- b
- ▁gonna
- ▁think
- ▁cuz
- in
- ▁your
- k
- ve
- le
- w
- an
- ▁she
- l
- ▁well
- en
- f
- ▁up
- al
- ▁two
- h
- ar
- ▁how
- ▁mhm
- v
- ▁here
- ly
- ▁put
- ▁out
- ▁would
- ▁at
- ▁need
- ▁did
- ▁f
- ▁want
- ▁mm
- ▁more
- ch
- ri
- ▁now
- or
- ▁when
- ▁k
- ▁p
- ▁see
- ▁got
- ▁too
- ▁thing
- ▁time
- 'on'
- ▁actually
- ▁where
- ne
- ▁guys
- ▁some
- ▁had
- ▁why
- ic
- ▁them
- ▁st
- ro
- ▁make
- ur
- ▁three
- ▁b
- ▁mean
- ▁wanna
- ▁should
- at
- ▁from
- th
- ▁didn
- ▁about
- ▁yes
- ▁because
- ▁yep
- ▁people
- ▁co
- ▁could
- ▁were
- ▁take
- ▁has
- ▁something
- ce
- ▁w
- ▁c
- ▁sure
- ▁who
- ▁other
- ▁sh
- ▁say
- ▁an
- ▁her
- ▁g
- ▁work
- il
- es
- ▁little
- el
- ▁much
- ▁eat
- ▁still
- ▁wait
- ▁ma
- ▁four
- ▁de
- ▁only
- ▁down
- ▁though
- ▁way
- ▁lot
- ▁use
- ▁over
- ▁let
- ▁pretty
- ▁these
- ▁bo
- ▁any
- ▁off
- ▁ba
- ▁di
- ▁d
- ▁back
- ▁sorry
- ▁those
- ▁very
- ▁bit
- ▁even
- li
- ▁stuff
- ke
- ate
- z
- ▁probably
- ▁nice
- ▁turn
- ▁doesn
- ▁first
- ▁does
- ▁hmm
- ▁look
- ▁going
- ▁play
- ▁ho
- pe
- ▁maybe
- ▁come
- ▁fine
- ▁cut
- ▁man
- ▁bu
- ▁ca
- ▁mo
- ▁th
- lo
- ▁never
- ry
- ▁po
- ▁h
- ▁will
- us
- x
- ge
- ▁five
- ▁start
- ▁him
- ▁long
- ▁give
- ▁se
- ting
- ▁sp
- ▁ra
- ▁done
- ▁con
- ▁big
- ▁his
- ▁y
- ▁which
- ▁been
- ▁dunno
- est
- ion
- ▁fa
- ▁than
- me
- ▁our
- ▁also
- ▁six
- ▁kinda
- co
- ▁cool
- ty
- ▁game
- ▁thought
- ▁fi
- ▁after
- ▁day
- ▁doing
- ment
- ▁said
- ▁whatever
- ap
- ▁place
- ▁anything
- ▁j
- ▁guess
- em
- ▁always
- ▁things
- ▁card
- ▁li
- ▁thank
- ▁last
- ▁before
- ▁many
- ▁watch
- ▁pa
- ▁year
- ▁ah
- ▁hot
- ▁into
- ▁ten
- ▁keep
- ▁bad
- tion
- ▁us
- ▁cr
- ▁part
- ▁cook
- ▁o
- ▁cards
- ▁everything
- ▁la
- ▁ha
- ▁by
- ▁wow
- ▁their
- ies
- ▁hey
- ▁same
- ▁went
- ▁pick
- ▁might
- ▁sc
- ▁ex
- ie
- ▁wood
- ight
- ▁another
- ▁better
- ▁try
- ard
- ▁seven
- ▁guy
- ▁point
- up
- op
- ▁twenty
- ▁hand
- ▁wh
- ▁food
- ▁tra
- ation
- ▁buy
- ▁kind
- ist
- ▁whole
- ive
- is
- ▁half
- able
- ▁pro
- ▁win
- ▁different
- ▁cl
- age
- ▁already
- ▁gotta
- ack
- ▁ti
- ▁lo
- ▁every
- ▁super
- ▁again
- ▁new
- ▁remember
- ers
- ▁dude
- um
- ▁feel
- ▁roll
- ▁cheese
- ▁na
- ▁sit
- ▁sa
- way
- ▁hard
- ▁enough
- 'no'
- ▁eight
- ity
- ▁friend
- ▁un
- ul
- ▁love
- ▁salt
- ▁mi
- ▁steak
- ▁nine
- ▁else
- ▁looks
- ▁pu
- ▁fl
- ▁build
- ▁pre
- ▁end
- ▁ta
- ▁salad
- ▁high
- ▁find
- ▁water
- ▁usually
- ▁small
- ▁around
- ▁butter
- ▁car
- ▁made
- ▁wash
- ▁move
- ▁plate
- ▁true
- ▁pan
- ain
- cu
- ▁nope
- ▁ooh
- ▁sauce
- ▁help
- ▁wa
- ▁left
- ▁person
- uck
- ▁top
- ▁side
- ▁cha
- ▁god
- ▁leave
- ▁goes
- ▁weird
- ▁each
- ▁r
- ▁basically
- ▁chicken
- ted
- ▁oil
- ▁trying
- ▁fun
- ▁close
- ▁taste
- ▁old
- ▁show
- ble
- ▁next
- ▁name
- ▁used
- ▁mine
- ous
- ▁great
- ▁pot
- ally
- ▁burn
- ▁huh
- ▁minutes
- ▁once
- ▁phone
- ▁bowl
- tic
- ▁tell
- ound
- ▁ask
- ▁mu
- ▁thirty
- ▁someone
- ▁piece
- ▁saying
- ▁vi
- ish
- ▁ja
- ▁comp
- ▁called
- ▁through
- ▁gr
- ize
- ▁everyone
- ▁funny
- ▁getting
- ▁won
- ▁bl
- ▁away
- ▁pi
- ▁chi
- ▁totally
- ▁red
- ▁word
- ▁hundred
- ▁open
- ▁dollar
- ▁stone
- ▁yet
- ade
- ▁du
- ▁mmm
- ▁sound
- ▁both
- ▁mar
- ant
- ▁potatoes
- ▁garlic
- fi
- ▁hear
- ▁pass
- ▁saw
- ▁kill
- ▁second
- ▁girl
- ▁shit
- ▁throw
- ▁bought
- ▁please
- ▁che
- ▁da
- ▁hit
- ▁tea
- ▁hold
- ▁shoot
- ▁most
- ▁clean
- ▁wanted
- ▁pepper
- ▁happen
- ▁aw
- ▁home
- ▁drink
- ance
- ▁yo
- ▁sheep
- ▁while
- ▁ro
- ▁house
- ▁call
- ▁meat
- ▁face
- ▁fuck
- ▁talking
- ▁green
- ries
- side
- ▁set
- ▁exactly
- huh
- ▁hour
- ▁ready
- ▁played
- ▁finish
- ▁add
- ▁susie
- q
- ▁stop
- ▁almost
- ▁bring
- ▁rice
- ▁ear
- ▁sweet
- ▁hi
- ▁pizza
- ake
- ▁wi
- ▁gra
- ▁free
- ▁night
- ▁pay
- ▁rick
- ▁full
- ▁wheat
- ▁count
- ▁white
- ful
- ▁light
- ▁plan
- ▁supposed
- ▁either
- ▁bacon
- ▁sim
- ▁sense
- ▁blue
- ▁team
- ▁interesting
- ▁care
- ▁room
- nut
- ward
- ▁real
- ▁week
- ▁heard
- ▁told
- ▁mind
- ▁table
- ▁head
- ash
- ▁looking
- ▁ever
- ▁check
- ▁together
- ▁ju
- ▁app
- ▁grab
- ▁brown
- ▁eh
- book
- ▁stick
- ▁later
- ▁pea
- ▁talk
- ▁awesome
- ▁cream
- ling
- ▁fifty
- ▁color
- ▁qu
- ▁round
- ▁nothing
- ▁power
- ▁deal
- ▁matter
- ▁player
- ▁draw
- ▁having
- ▁kid
- ▁fish
- ▁damn
- ▁own
- ▁crazy
- ▁dad
- ▁took
- ▁perfect
- ▁idea
- ▁couple
- ▁live
- ▁job
- ▁smell
- ▁number
- ▁reason
- ▁best
- ▁forty
- ▁making
- ▁dinner
- ▁change
- ▁playing
- ▁sometimes
- ▁fridge
- ▁miss
- j
- ▁woah
- ▁chancey
- ▁bucks
- ▁brick
- ▁rec
- ▁run
- ▁far
- ball
- ▁bread
- ▁fast
- ▁knife
- ▁black
- ▁break
- ▁mix
- ▁today
- ▁cheap
- ▁mike
- ▁expensive
- out
- ▁normal
- ▁under
- ▁using
- ▁double
- ▁gold
- ▁life
- ▁oven
- ▁less
- ▁space
- ▁wine
- ence
- land
- ▁sea
- ▁corn
- ▁cooking
- ▁stay
- ▁line
- ▁may
- ▁bar
- ▁block
- ▁late
- ▁yourself
- ▁quite
- ▁apple
- ▁extra
- ▁wedding
- ▁happened
- ▁kitchen
- ▁coming
- ▁zero
- ▁definitely
- ▁connect
- ▁read
- ▁crab
- ▁easier
- ▁mkay
- ▁egg
- ▁came
- ▁money
- ▁anyone
- ▁save
- ▁problem
- ▁club
- ▁tried
- ▁wrong
- ▁spot
- ▁low
- ▁amazing
- ▁milk
- ▁jeff
- ▁flip
- ▁text
- ▁bottle
- jo
- ▁without
- ▁parents
- ▁anymore
- ▁course
- ship
- ▁month
- ▁chinese
- ▁must
- ▁movie
- ▁wonder
- ▁bunch
- ▁family
- ▁season
- ▁quick
- ▁past
- ▁paul
- ▁rid
- ▁tennis
- town
- ▁cold
- ▁serious
- ▁drive
- ▁boil
- ▁screw
- ▁least
- ▁everybody
- ▁sort
- ▁thomas
- ▁rest
- ▁suck
- ▁road
- ▁fair
- ▁forgot
- ▁order
- ▁middle
- ▁babe
- ▁bang
- ▁dress
- ▁sleep
- ▁question
- ▁until
- ▁sheriff
- ▁chop
- ▁restaurant
- ▁outside
- ▁learn
- ▁stand
- ▁walk
- ▁attack
- ▁trade
- ▁phil
- ▁few
- ▁strong
- ▁school
- ▁world
- ▁company
- ▁easy
- ▁hockey
- ▁somebody
- ▁short
- ▁figure
- ▁spice
- ▁apparently
- ▁since
- ▁serve
- ▁huge
- ▁saboteur
- ▁fifteen
- ▁myself
- ▁such
- ▁port
- ▁literally
- ▁lose
- ▁crap
- ught
- ▁gosh
- ▁unless
- ▁joke
- ▁store
- ▁bigger
- ▁spell
- ▁ago
- ▁hang
- ▁depend
- ▁ginger
- ▁slow
- ▁medium
- ▁record
- acti
- ▁kenny
- ▁picture
- old
- ▁thousand
- ▁cover
- ▁tree
- ▁obvious
- ▁glass
- ▁taking
- ▁letter
- ▁eleven
- ▁skin
- ▁market
- ▁anybody
- ▁ahead
- ▁morning
- ▁brand
- ▁paper
- ▁lemon
- ▁onions
- ▁juice
- ▁jimmy
- ▁living
- ▁front
- ▁bottom
- ▁dark
- ▁oops
- ▁arjan
- ▁shot
- ▁rule
- ▁hun
- ▁flavor
- ▁speak
- ▁gun
- ▁potato
- ▁worry
- ▁twelve
- ▁sandwich
- ▁plus
- ▁believe
- ▁knew
- ▁realize
- ▁sugar
- ▁happy
- ▁sister
- ▁entire
- ▁master
- ▁eye
- ▁touch
- ▁wenny
- ▁drop
- ▁price
- ▁slice
- ▁sword
- ▁spicy
- ▁listen
- ▁outlaw
- que
- ▁percent
- ▁yesterday
- ▁mushroom
- ▁worth
- ▁proper
- ▁story
- ▁megan
- ▁character
- ▁hair
- ▁straight
- ▁discard
- ▁spoon
- ▁understand
- ▁computer
- ▁type
- ▁nikki
- ▁tomorrow
- ▁trump
- ▁third
- ▁bennet
- ▁nobody
- ▁somewhere
- ▁amount
- ▁split
- ▁accent
- ▁group
- ▁trip
- ▁lunch
- ▁racket
- ▁level
- ▁difference
- ▁orange
- ▁gave
- ▁dessert
- ▁single
- ▁chocolate
- ▁junette
- ▁camera
- ▁regular
- ▁video
- ▁gross
- ▁notice
- ▁actual
- ▁between
- ▁surprise
- ▁smart
- ▁east
- ▁craft
- ▁rock
- ▁certain
- ▁rather
- ▁lobster
- ▁photo
- ▁favorite
- ▁behind
- ▁across
- ▁steal
- ▁spend
- ▁weekend
- ▁special
- ▁sign
- ▁wrap
- ▁except
- ▁john
- ▁conversation
- ▁asian
- ▁grand
- ▁online
- ▁explain
- ▁dishes
- ▁magic
- ▁decide
- ▁fancy
- ▁random
- ▁tunnel
- ▁switch
- ▁transcribe
- ▁english
- ▁giant
- ▁kick
- ▁claire
- ▁laugh
- ▁yellow
- ▁delicious
- ▁freeze
- ▁drunk
- ▁general
- ▁gimme
- ▁damage
- ▁breakfast
- ▁roast
- ▁josh
- ▁choose
- ▁email
- ▁direct
- ▁tomatoes
- ▁fruit
- ▁apart
- ▁chopstick
- ▁vancouver
- ▁kept
- tract
- ▁chunk
- ▁girlfriend
- ▁shuffle
- ▁terrible
- ▁diamond
- ▁sausage
- ▁sweat
- ▁iphone
- ▁pineapple
- ▁summer
- ▁french
- ▁fresh
- ▁heavy
- ▁million
- ▁instead
- ▁ridiculous
- ▁tough
- ▁friday
- ▁whenever
- ▁coffee
- ▁hilarious
- ▁worried
- ▁especially
- ▁shrimp
- ▁avocado
- '&'
- ä
- '#'
- ǎ
- î
- ü
- ǐ
- ñ
- â
- ç
- ']'
- é
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 100
num_freq_mask: 4
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 128
encoder: transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.0
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
e8db10bd939b22d4b2a999eac79034bd
|
Shenyancheng/distilbert-base-uncased-finetuned-ner
|
Shenyancheng
|
distilbert
| 18 | 7 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,556 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9267
- Recall: 0.9371
- F1: 0.9319
- Accuracy: 0.9838
## Model description
More information needed
## Intended uses & limitations
More information needed
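A minimal inference sketch with the `transformers` pipeline (the sentence is illustrative; label names such as PER/ORG/LOC/MISC vs. LABEL_0... depend on this checkpoint's config):
```python
from transformers import pipeline

# Token-classification (NER) pipeline; aggregation merges word pieces into entity spans.
ner = pipeline(
    "token-classification",
    model="Shenyancheng/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face is a company based in New York City."))
```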
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2462 | 1.0 | 878 | 0.0714 | 0.9052 | 0.9223 | 0.9137 | 0.9803 |
| 0.0535 | 2.0 | 1756 | 0.0615 | 0.9188 | 0.9331 | 0.9259 | 0.9827 |
| 0.0315 | 3.0 | 2634 | 0.0620 | 0.9267 | 0.9371 | 0.9319 | 0.9838 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
4b85c7099f79176753f4dd763921043f
|
StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES
|
StivenLancheros
|
roberta
| 14 | 14 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,309 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Precision: 0.8298
- Recall: 0.8306
- F1: 0.8302
- Accuracy: 0.9659
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical from the CRAFT (Colorado Richly Annotated Full Text) Corpus in English. Entity tags have been normalized and replaced from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
This model is trained on augmented data created using Entity Replacement: 20% of the entities were replaced using a list of entities for each entity tag, obtained from the official ontologies for each entity class. Three datasets (original, augmented, MT-translated CRAFT) were concatenated.
## Intended uses & limitations
More information needed
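A minimal inference sketch (the example sentence is illustrative; expected tags follow the scheme described above, e.g. Protein, Chemical, Taxon):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/roberta-base-biomedical-clinical-es-finetuned-ner-CRAFT_Augmented_ES",
    aggregation_strategy="simple",
)

# Illustrative sentence; entities such as proteins, chemicals, and taxa should be tagged.
print(ner("The p53 protein regulates apoptosis in Mus musculus after cisplatin treatment."))
```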
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0624 | 1.0 | 4078 | 0.1844 | 0.8002 | 0.7923 | 0.7963 | 0.9607 |
| 0.0284 | 2.0 | 8156 | 0.1937 | 0.8394 | 0.7988 | 0.8186 | 0.9637 |
| 0.0118 | 3.0 | 12234 | 0.2007 | 0.8285 | 0.8232 | 0.8258 | 0.9649 |
| 0.0043 | 4.0 | 16312 | 0.2224 | 0.8298 | 0.8306 | 0.8302 | 0.9659 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
5e2255037919c0ddea01627b1996d2a5
|
IIIT-L/muril-base-cased-finetuned-TRAC-DS
|
IIIT-L
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,378 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muril-base-cased-finetuned-TRAC-DS
This model is a fine-tuned version of [google/muril-base-cased](https://huggingface.co/google/muril-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1894
- Accuracy: 0.6838
- Precision: 0.6534
- Recall: 0.6513
- F1: 0.6522
## Model description
More information needed
## Intended uses & limitations
More information needed
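A minimal inference sketch with the `transformers` text-classification pipeline (the input is illustrative; label names, e.g. LABEL_0/1/2 vs. task-specific names, depend on this checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="IIIT-L/muril-base-cased-finetuned-TRAC-DS",
)

# Illustrative input sentence.
print(classifier("This is a test sentence."))
```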
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.0109 | 1.99 | 612 | 0.9284 | 0.5948 | 0.4327 | 0.5193 | 0.4509 |
| 0.8635 | 3.99 | 1224 | 0.8556 | 0.6291 | 0.6012 | 0.5865 | 0.5888 |
| 0.764 | 5.98 | 1836 | 0.8585 | 0.6609 | 0.6249 | 0.6275 | 0.6260 |
| 0.6744 | 7.97 | 2448 | 0.8469 | 0.6732 | 0.6391 | 0.6408 | 0.6398 |
| 0.5865 | 9.97 | 3060 | 0.8438 | 0.6667 | 0.6424 | 0.6395 | 0.6395 |
| 0.4978 | 11.96 | 3672 | 0.9269 | 0.6855 | 0.6532 | 0.6582 | 0.6542 |
| 0.4245 | 13.95 | 4284 | 0.9934 | 0.6699 | 0.6397 | 0.6482 | 0.6396 |
| 0.378 | 15.95 | 4896 | 1.0488 | 0.6830 | 0.6530 | 0.6446 | 0.6474 |
| 0.3349 | 17.94 | 5508 | 1.0548 | 0.6806 | 0.6505 | 0.6536 | 0.6518 |
| 0.3019 | 19.93 | 6120 | 1.1092 | 0.6757 | 0.6476 | 0.6497 | 0.6482 |
| 0.2869 | 21.93 | 6732 | 1.1515 | 0.6814 | 0.6507 | 0.6514 | 0.6510 |
| 0.2575 | 23.92 | 7344 | 1.1894 | 0.6838 | 0.6534 | 0.6513 | 0.6522 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
5a8a17bf0ffd3b8cb5717128206ad566
|
Rocketknight1/temp-colab-upload-test4
|
Rocketknight1
|
distilbert
| 8 | 1 |
transformers
| 0 |
text-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,205 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/temp-colab-upload-test4
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0000
- Validation Loss: 0.0000
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0000 | 0.0000 | 0 |
| 0.0000 | 0.0000 | 1 |
### Framework versions
- Transformers 4.18.0.dev0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
d7a40ac2634be601a5da2b34799e78e8
|
BlueRaccoon/whisper-medium-br
|
BlueRaccoon
|
whisper
| 23 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['br']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,558 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Breton
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 br dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8486
- Wer: 41.6117
## Model description
More information needed
## Intended uses & limitations
More information needed
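A minimal transcription sketch with the `transformers` pipeline (`sample.wav` is a placeholder Breton recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="BlueRaccoon/whisper-medium-br",
)

# chunk_length_s enables long-form transcription by chunking the audio.
print(asr("sample.wav", chunk_length_s=30)["text"])
```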
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0602 | 5.03 | 1000 | 0.7324 | 43.6957 |
| 0.0036 | 10.05 | 2000 | 0.8486 | 41.6117 |
| 0.001 | 15.08 | 3000 | 0.9033 | 42.0458 |
| 0.0004 | 20.1 | 4000 | 0.9351 | 41.6811 |
| 0.0003 | 25.13 | 5000 | 0.9468 | 41.7853 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
f0314155a728b09ccfba2d9c53b0a094
|
research-backup/t5-small-subjqa-vanilla-movies-qg
|
research-backup
|
t5
| 34 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_subjqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 3,961 | false |
# Model Card of `research-backup/t5-small-subjqa-vanilla-movies-qg`
This model is fine-tuned version of [t5-small](https://huggingface.co/t5-small) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: movies) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (movies)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-subjqa-vanilla-movies-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-subjqa-vanilla-movies-qg")
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-movies-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json)
| | Score | Type | Dataset |
|:-----------|--------:|:-------|:-----------------------------------------------------------------|
| BERTScore | 4.78 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_1 | 0.17 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_2 | 0 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_3 | 0 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| Bleu_4 | 0 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR | 0.22 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore | 49.11 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE_L | 0.28 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: movies
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: ['qg']
- model: t5-small
- max_length: 512
- max_length_output: 32
- epoch: 1
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-subjqa-vanilla-movies-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
7b1c21b0a2a286c5da391b06bf432328
|
rohbrian/distilbert-base-uncased-finetuned-squad
|
rohbrian
|
distilbert
| 12 | 1 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,284 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1561
## Model description
More information needed
## Intended uses & limitations
More information needed
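A minimal extractive-QA sketch with the `transformers` pipeline (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="rohbrian/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="What is the model fine-tuned on?",
    context="This checkpoint was fine-tuned on the SQuAD question answering dataset.",
)
print(result["answer"], result["score"])
```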
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2353 | 1.0 | 5533 | 1.1740 |
| 0.9722 | 2.0 | 11066 | 1.1192 |
| 0.7677 | 3.0 | 16599 | 1.1561 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
2ea46d75c5faf30cbe07e98551aba010
|
aviator-neural/gpt2-donald_trump
|
aviator-neural
|
gpt2
| 14 | 51 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,115 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-donald_trump
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8721
## Model description
More information needed
## Intended uses & limitations
More information needed
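A minimal generation sketch with the `transformers` pipeline (the prompt is illustrative and the sampling parameters are arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="aviator-neural/gpt2-donald_trump")

output = generator(
    "We are going to",   # illustrative prompt
    max_length=40,
    do_sample=True,
    top_p=0.95,
)
print(output[0]["generated_text"])
```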
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 391 | 2.8721 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.0
- Tokenizers 0.10.3
|
5ec1af7ba5c9b4b36f0b652b213605ae
|
lixiqi/beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-9e-05
|
lixiqi
|
beit
| 23 | 1 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['image_folder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,508 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-patch16-224-pt22k-ft22k-finetuned-FER2013-9e-05
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8481
- Accuracy: 0.6840
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1839 | 1.0 | 224 | 1.0266 | 0.6120 |
| 1.0333 | 2.0 | 448 | 0.9063 | 0.6608 |
| 0.9655 | 3.0 | 672 | 0.8481 | 0.6840 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
9db21bbc2c20b9d700c4373407fe93b5
|
HPL/roberta-large-unlabeled-gab-semeval2023-task10-9000sample
|
HPL
|
roberta
| 11 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,275 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-unlabeled-gab-semeval2023-task10-9000sample
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0541
## Model description
More information needed
## Intended uses & limitations
More information needed
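A minimal fill-mask sketch (RoBERTa uses `<mask>` as its mask token; the sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="HPL/roberta-large-unlabeled-gab-semeval2023-task10-9000sample",
)

for candidate in fill_mask("The weather today is <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```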
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.515 | 1.0 | 563 | 2.3288 |
| 2.2807 | 2.0 | 1126 | 2.1769 |
| 2.0351 | 3.0 | 1689 | 2.0541 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.10.3
|
bc3be1d9f0c7a04c8fd71ddc1b668840
|
ahernandezmiro/WednesdayAddams
|
ahernandezmiro
| null | 8 | 0 | null | 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stable-diffusion', 'text-to-image']
| false | true | true | 1,750 | false |
**Wednesday Diffusion**
This is a fine-tuned Stable Diffusion 1.4 model trained on promotional pictures of Jenna Ortega as Wednesday Addams in Netflix's adaptation.
Use the token **_WednesdayAdJO_** in your prompts for the effect.
This model was trained using the diffusers-based DreamBooth training script by ShivamShrirao.
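A minimal sketch with `diffusers`, assuming the weights are (or have been converted to) diffusers format under this repository id; if only a `.ckpt` file is provided, it would need conversion first:
```python
import torch
from diffusers import StableDiffusionPipeline

# Repository id assumed to hold a diffusers-format pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "ahernandezmiro/WednesdayAddams", torch_dtype=torch.float16
).to("cuda")

# Use the instance token from the card in the prompt.
image = pipe("a portrait photo of WednesdayAdJO in a gothic library").images[0]
image.save("wednesday.png")
```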
**Examples**
**1:**

**2:**

**3:**

**4:**

**5:**

## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights over the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
f1c6bbe0300304dc0ee57ba33fffb526
|
vai6hav/wav2vec2-large-xls-r-300m-hindi-epochs60-colab
|
vai6hav
|
wav2vec2
| 13 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,339 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-epochs60-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7322
- Wer: 0.9188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.2832 | 44.42 | 400 | 1.7322 | 0.9188 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
da236547057d5722c08625366a951bb1
|
sphchen/EHR_ML_simulation_2
|
sphchen
|
gpt2
| 11 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,025 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EHR_ML_simulation_2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
897409c050566f35c97be59405c0075a
|
Josh98/t5-small-finetuned-English-to-BASH
|
Josh98
|
t5
| 15 | 4 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,985 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-English-to-BASH
This model is a fine-tuned version of [kevinum/t5-small-finetuned-English-to-BASH](https://huggingface.co/kevinum/t5-small-finetuned-English-to-BASH) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7624
- Bleu: 15.8119
- Gen Len: 7.75
## Model description
More information needed
## Intended uses & limitations
More information needed
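A minimal sketch with the `text2text-generation` pipeline (the exact input format or prefix used during fine-tuning is not documented here, so the plain-English prompt below is an assumption):
```python
from transformers import pipeline

nl2bash = pipeline(
    "text2text-generation",
    model="Josh98/t5-small-finetuned-English-to-BASH",
)

# Illustrative natural-language request; output quality will vary.
print(nl2bash("list all files in the current directory")[0]["generated_text"])
```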
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 36 | 2.4759 | 9.4129 | 12.8472 |
| No log | 2.0 | 72 | 2.2581 | 14.8612 | 9.7639 |
| No log | 3.0 | 108 | 2.0998 | 16.1955 | 8.7222 |
| No log | 4.0 | 144 | 1.9945 | 14.576 | 8.4444 |
| No log | 5.0 | 180 | 1.9181 | 15.4464 | 8.1806 |
| No log | 6.0 | 216 | 1.8639 | 14.7446 | 7.9028 |
| No log | 7.0 | 252 | 1.8185 | 14.5825 | 8.0833 |
| No log | 8.0 | 288 | 1.7867 | 14.9773 | 7.9444 |
| No log | 9.0 | 324 | 1.7679 | 15.8119 | 7.75 |
| No log | 10.0 | 360 | 1.7624 | 15.8119 | 7.75 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
3804ed13c53239a369fea5dee37288f6
|
spacemanidol/esci-mlm-us-bert-base-uncased
|
spacemanidol
|
bert
| 13 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,163 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# esci-us-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1785
- Accuracy: 0.7499
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.7.1+cu110
- Datasets 1.18.0
- Tokenizers 0.12.1
|
c6cbad89ce55cb14a31b8ecf9a7174be
|
Finnish-NLP/ul2-mini-nl8-finnish
|
Finnish-NLP
|
t5
| 19 | 5 |
transformers
| 0 |
text2text-generation
| true | false | true |
apache-2.0
|
['fi']
|
['Finnish-NLP/mc4_fi_cleaned', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['finnish', 't5', 't5x', 'seq2seq', 'ul2']
| false | true | true | 12,671 | false |
# UL2-mini-nl8 for Finnish
Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction), which has been fine-tuned to correct missing casing and punctuation in Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-mini-nl8](https://huggingface.co/google/t5-efficient-mini-nl8) architecture's layer depth which means both the encoder and the decoder have 8 transformer layers compared to the original T5 "mini" model's architecture of 4 transformer layers.
In total, this model has 72 million parameters.
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training denoising task. During the pretraining, a paradigm token is inserted to the input (`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand. Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, without any supervised training. Therefore, unlike Google's original T5 model, it has to be fine-tuned before it is usable on a downstream task, like text classification. **Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, you will most likely get better results if you insert a prefix token of `[NLU]`, `[NLG]`, or `[S2S]` into your input texts. For general language understanding fine-tuning tasks, you could use the `[NLU]` token. For GPT-style causal language generation, you could use the `[S2S]` token. The token `[NLG]` of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so `[NLG]` could perhaps also be used for language generation fine-tuning.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish", from_pt=True)
```
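As a minimal illustration of the mode-switching prefixes described above (whether `[NLU]` is kept as a dedicated vocabulary token or split into pieces depends on the released tokenizer; the Finnish sentence is illustrative):
```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")

# Prepend the paradigm token that matches the intended downstream use.
text = "[NLU] Tämä on esimerkkilause."   # illustrative Finnish input
inputs = tokenizer(text, return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
```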
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad-quality and non-Finnish examples. In addition, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model that was trained only on very clean Finnish texts. This perplexity score can then be used to determine how clean the Finnish text is. Lastly, all datasets were concatenated and the 90th-percentile perplexity score was used as a filtering threshold to drop the worst-quality 10% of texts. Together these cleaned datasets amounted to around 76GB of text.
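A minimal sketch of this kind of perplexity filtering (the KenLM model path, the example texts, and the exact threshold computation are illustrative, not the actual ones used):
```python
import kenlm
import numpy as np

# Hypothetical KenLM model trained on very clean Finnish text
lm = kenlm.Model("clean_finnish.arpa")

texts = ["Tämä on siisti suomenkielinen lause.", "asdf qwer zxcv 1234"]
perplexities = [lm.perplexity(t) for t in texts]

# Keep the 90% of texts with the lowest perplexity and drop the worst 10%
threshold = np.percentile(perplexities, 90)
kept = [t for t, p in zip(texts, perplexities) if p <= threshold]
```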
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (in total 66B tokens). The optimizer used was AdaFactor with a learning-rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay of the learning rate.
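A small sketch of such a schedule (inferred from the description above, not taken from the actual t5x config):
```python
def learning_rate(step: int, base_lr: float = 1e-2, warmup_steps: int = 10_000) -> float:
    """Constant learning rate during warmup, then inverse square root decay."""
    if step < warmup_steps:
        return base_lr
    return base_lr * (warmup_steps / step) ** 0.5

print(learning_rate(5_000), learning_rate(40_000))  # 0.01, 0.005
```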
Training code was from Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2. Used UL2 objective code is available in this repository in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper, except for the denoiser mixing rates: 20% was used for S-denoising (as suggested in chapter 4.5 of the paper) and the rest was divided equally between R-denoising and X-denoising (i.e. 40% each).
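A toy sketch of sampling a denoising task per training example with these rates (the real implementation lives in the `ul2_objective.py` and `tasks.py` files mentioned above):
```python
import random

def sample_denoiser() -> str:
    """Sample a UL2 denoising task: 40% R, 40% X, 20% S."""
    r = random.random()
    if r < 0.4:
        return "[NLU]"  # R-denoising (regular span corruption)
    if r < 0.8:
        return "[NLG]"  # X-denoising (extreme span corruption)
    return "[S2S]"      # S-denoising (sequential PrefixLM)

print(sample_denoiser())
```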
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text.
When fine-tuned on those datasets, this model (the second row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 |
|Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 |
|Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 |
|Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 |
|Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 |
Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are the following:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
When fine-tuning Google's multilingual mT5 models on the same datasets, we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
29c8b4c8e5d253f523cca5d6e123ce9c
|
fpuentes/bert-fromscratch-galician-tiny
|
fpuentes
|
roberta
| 11 | 13 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
|
['gl']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 1,331 | false |
## Model description
A model with ~67M parameters, trained and fine-tuned from scratch on a 305MB Galician dataset obtained from the Galician Wikipedia.
It was developed in the context of the Resolution of 22 December 2021 of the Secretaría Xeral de Educación e Formación Profesional, which announced awards for the development of technological or scientific innovation projects and didactic innovation projects in vocational training at public schools under the Consellería de Cultura, Educación e Universidade, under the title "Creation of a language model pre-trained with self-attention techniques to explore architectures that enable its use in Galician natural language processing solutions, both in teaching and in business environments".
## Intended uses & limitations
This model was created for educational and research purposes.
## Training hyperparameters
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.08113086280077723,0.8857246592117177) and epsilon=5.264065162059701e-07
- lr_scheduler_type: linear
- num_epochs: 15
### Results
- Loss: 1.6262
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1
- Datasets 2.6.1
- Tokenizers 0.11.0
|
731cac844341c3035d3330aed29c45c0
|
gokuls/distilbert_sa_GLUE_Experiment_rte_192
|
gokuls
|
distilbert
| 17 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,112 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_rte_192
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6920
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6951 | 1.0 | 10 | 0.6927 | 0.5271 |
| 0.6935 | 2.0 | 20 | 0.6925 | 0.5271 |
| 0.692 | 3.0 | 30 | 0.6931 | 0.5162 |
| 0.694 | 4.0 | 40 | 0.6932 | 0.5090 |
| 0.6923 | 5.0 | 50 | 0.6950 | 0.4729 |
| 0.6932 | 6.0 | 60 | 0.6921 | 0.5271 |
| 0.6926 | 7.0 | 70 | 0.6928 | 0.5235 |
| 0.6917 | 8.0 | 80 | 0.6929 | 0.5271 |
| 0.6896 | 9.0 | 90 | 0.6920 | 0.5271 |
| 0.6758 | 10.0 | 100 | 0.7009 | 0.4801 |
| 0.6273 | 11.0 | 110 | 0.7272 | 0.4946 |
| 0.5267 | 12.0 | 120 | 0.7684 | 0.5199 |
| 0.4413 | 13.0 | 130 | 0.8273 | 0.4946 |
| 0.3725 | 14.0 | 140 | 0.8790 | 0.4946 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
21ff6b5e88e765eaf421da8ee9ccb667
|
IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese
|
IDEA-CCNL
|
bert
| 5 | 30 |
transformers
| 1 |
fill-mask
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
['classification']
| false | true | true | 8,635 | false |
# Erlangshen-TCBert-110M-Classification-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
110M参数的Topic Classification BERT (TCBert)。
The TCBert with 110M parameters is pre-trained for, but not limited to, Chinese topic classification tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | TCBert | 110M | Chinese |
## 模型信息 Model Information
为了提高模型在话题分类上的效果,我们收集了大量话题分类数据进行基于prompts的预训练。
To improve the model performance on the topic classification task, we collected numerous topic classification datasets for pre-training based on general prompts.
### 下游效果 Performance
我们为每个数据集设计了两个prompt模板。
We customize two prompts templates for each dataset.
第一个prompt模板:
For ***prompt template 1***:
| Dataset | Prompt template 1 |
|---------|:------------------------:|
| TNEWS | 下面是一则关于__的新闻: |
| CSLDCP | 这一句描述__的内容如下: |
| IFLYTEK | 这一句描述__的内容如下: |
第一个prompt模板的微调实验结果:
The **fine-tuning** results for prompt template 1:
| Model | TNEWS | CLSDCP | IFLYTEK |
|-----------------|:------:|:------:|:-------:|
| Macbert-base | 55.02 | 57.37 | 51.34 |
| Macbert-large | 55.77 | 58.99 | 50.31 |
| Erlangshen-1.3B | 57.36 | 62.35 | 53.23 |
| TCBert-base<sub>110M-Classification-Chinese | 55.57 | 58.60 | 49.63 |
| TCBert-large<sub>330M-Classification-Chinese | 56.17 | 60.06 | 51.34 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 65.10 | 53.75 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 57.46 | 65.04 | 53.06 |
第一个prompt模板的句子相似度结果:
The **sentence similarity** results for prompt template 1:
| | TNEWS | | CSLDCP | | IFLYTEK | |
|-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Model | reference | whitening | reference | whitening | reference | whitening |
| Macbert-base | 43.53 | 47.16 | 33.50 | 36.53 | 28.99 | 33.85 |
| Macbert-large | 46.17 | 49.35 | 37.65 | 39.38 | 32.36 | 35.33 |
| Erlangshen-1.3B | 45.72 | 49.60 | 40.56 | 44.26 | 29.33 | 36.48 |
| TCBert-base<sub>110M-Classification-Chinese | 48.61 | 51.99 | 43.31 | 45.15 | 33.45 | 37.28 |
| TCBert-large<sub>330M-Classification-Chinese | 50.50 | 52.79 | 52.89 | 53.89 | 34.93 | 38.31 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 50.80 | 51.59 | 51.93 | 54.12 | 33.96 | 38.08 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 45.82 | 47.06 | 42.91 | 43.87 | 33.28 | 34.76 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.10 | 50.90 | 53.78 | 53.33 | 37.62 | 36.94 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.70 | 53.48 | 52.66 | 54.40 | 36.88 | 38.48 |
第二个prompt模板:
For ***prompt template 2***:
| Dataset | Prompt template 2 |
|---------|:------------------------:|
| TNEWS | 接下来的新闻,是跟__相关的内容: |
| CSLDCP | 接下来的学科,是跟__相关: |
| IFLYTEK | 接下来的生活内容,是跟__相关: |
第二个prompt模板的微调结果:
The **fine-tuning** results for prompt template 2:
| Model | TNEWS | CLSDCP | IFLYTEK |
|-----------------|:------:|:------:|:-------:|
| Macbert-base | 54.78 | 58.38 | 50.83 |
| Macbert-large | 56.77 | 60.22 | 51.63 |
| Erlangshen-1.3B | 57.81 | 62.80 | 52.77 |
| TCBert-base<sub>110M-Classification-Chinese | 54.58 | 59.16 | 49.80 |
| TCBert-large<sub>330M-Classification-Chinese | 56.22 | 61.23 | 50.77 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 57.41 | 64.82 | 53.34 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 54.68 | 59.78 | 49.40 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 55.32 | 62.07 | 51.11 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 56.87 | 65.83 | 52.94 |
第二个prompt模板的句子相似度结果:
The **sentence similarity** results for prompt template 2:
| | TNEWS | | CSLDCP | | IFLYTEK | |
|-----------------|:--------:|:---------:|:---------:|:---------:|:---------:|:---------:|
| Model | reference | whitening | reference | whitening | reference | whitening |
| Macbert-base | 42.29 | 45.22 | 34.23 | 37.48 | 29.62 | 34.13 |
| Macbert-large | 46.22 | 49.60 | 40.11 | 44.26 | 32.36 | 35.16 |
| Erlangshen-1.3B | 46.17 | 49.10 | 40.45 | 45.88 | 30.36 | 36.88 |
| TCBert-base<sub>110M-Classification-Chinese | 48.31 | 51.34 | 43.42 | 45.27 | 33.10 | 36.19 |
| TCBert-large<sub>330M-Classification-Chinese | 51.19 | 51.69 | 52.55 | 53.28 | 34.31 | 37.45 |
| TCBert-1.3B<sub>1.3B-Classification-Chinese | 52.14 | 52.39 | 51.71 | 53.89 | 33.62 | 38.14 |
| TCBert-base<sub>110M-Sentence-Embedding-Chinese | 46.72 | 48.86 | 43.19 | 43.53 | 34.08 | 35.79 |
| TCBert-large<sub>330M-Sentence-Embedding-Chinese | 50.65 | 51.94 | 53.84 | 53.67 | 37.74 | 36.65 |
| TCBert-1.3B<sub>1.3B-Sentence-Embedding-Chinese | 50.75 | 54.78 | 51.43 | 54.34 | 36.48 | 38.36 |
更多关于TCBERTs的细节,请参考我们的技术报告。基于新的数据,我们会更新TCBERTs,请留意我们仓库的更新。
For more details about TCBERTs, please refer to our paper. We may regularly update TCBERTs as new data comes in, so please keep an eye on the repo!
## 使用 Usage
### 使用示例 Usage Examples
```python
# Prompt-based MLM fine-tuning
from transformers import BertForMaskedLM, BertTokenizer
import torch
# Loading models
tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese")
model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese")
# Prepare the data
inputs = tokenizer("下面是一则关于[MASK][MASK]的新闻:怎样的房子才算户型方正?", return_tensors="pt")
labels = tokenizer("下面是一则关于房产的新闻:怎样的房子才算户型方正?", return_tensors="pt")["input_ids"]
labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)
# Output the loss
outputs = model(**inputs, labels=labels)
loss = outputs.loss
```
```python
# Prompt-based Sentence Similarity
# To extract sentence representations.
from transformers import BertForMaskedLM, BertTokenizer
import torch
# Loading models
tokenizer=BertTokenizer.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese")
model=BertForMaskedLM.from_pretrained("IDEA-CCNL/Erlangshen-TCBert-110M-Classification-Chinese")
# Cosine similarity function
cos = torch.nn.CosineSimilarity(dim=0, eps=1e-8)
with torch.no_grad():
    # Extract the sentence representation for the training example
    training_input = tokenizer("怎样的房子才算户型方正?", return_tensors="pt")
    training_output = model(**training_input, output_hidden_states=True)
    training_representation = torch.mean(training_output.hidden_states[-1].squeeze(), dim=0)

    # Extract the sentence representation for the test example
    test_input = tokenizer("下面是一则关于[MASK][MASK]的新闻:股票放量下趺,大资金出逃谁在接盘?", return_tensors="pt")
    test_output = model(**test_input, output_hidden_states=True)
    test_representation = torch.mean(test_output.hidden_states[-1].squeeze(), dim=0)

# Calculate the similarity score between the two sentence representations
similarity_score = cos(training_representation, test_representation)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[技术报告](https://arxiv.org/abs/2211.11304):
If you use our model in your work, please cite the following paper:
```
@article{han2022tcbert,
title={TCBERT: A Technical Report for Chinese Topic Classification BERT},
author={Han, Ting and Pan, Kunhao and Chen, Xinyu and Song, Dingjie and Fan, Yuchen and Gao, Xinyu and Gan, Ruyi and Zhang, Jiaxing},
journal={arXiv preprint arXiv:2211.11304},
year={2022}
}
```
如果您在您的工作中使用了我们的模型,可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
b0516e3134f782fc4bd07724e5541131
|
ali221000262/wav2vec2-base-timit-demo-colab
|
ali221000262
|
wav2vec2
| 19 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,309 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [ali221000262/wav2vec2-base-timit-demo-colab](https://huggingface.co/ali221000262/wav2vec2-base-timit-demo-colab) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2161
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.6432 | 13.89 | 500 | 3.2161 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
f0834eb6b2a327b5637c5599df1bcb89
|
z5ying/distilgpt2-finetuned-wikitext2
|
z5ying
|
gpt2
| 17 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,121 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [z5ying/distilgpt2-finetuned-wikitext2](https://huggingface.co/z5ying/distilgpt2-finetuned-wikitext2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 118 | 3.0306 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.0
|
10e16e8300d0cc1ef42b27c4e5dce19b
|
venturaville/xlm-roberta-base-finetuned-panx-de
|
venturaville
|
xlm-roberta
| 14 | 24 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,320 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1367
- F1: 0.8633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2582 | 1.0 | 525 | 0.1653 | 0.8238 |
| 0.1301 | 2.0 | 1050 | 0.1417 | 0.8439 |
| 0.0841 | 3.0 | 1575 | 0.1367 | 0.8633 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
cd9b6fb24159c5690702e1a51e13b751
|
hfl/cino-base-v2
|
hfl
|
xlm-roberta
| 8 | 45 |
transformers
| 4 |
fill-mask
| true | true | false |
apache-2.0
|
['zh', 'bo', 'kk', 'ko', 'mn', 'ug', 'yue']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,388 | false |
## CINO: Pre-trained Language Models for Chinese Minority Languages(中国少数民族预训练模型)
Multilingual pre-trained language models, such as mBERT and XLM-R, provide multilingual and cross-lingual abilities for language understanding.
We have seen rapid progress in building multilingual PLMs in recent years.
However, there is a lack of work on building PLMs for Chinese minority languages, which hinders researchers from building powerful NLP systems for them.
To address the absence of Chinese minority PLMs, the Joint Laboratory of HIT and iFLYTEK Research (HFL) proposes CINO (Chinese-miNOrity pre-trained language model), which is built on XLM-R with additional pre-training on Chinese minority-language corpora, including:
- Chinese,中文(zh)
- Tibetan,藏语(bo)
- Mongolian (Uighur form),蒙语(mn)
- Uyghur,维吾尔语(ug)
- Kazakh (Arabic form),哈萨克语(kk)
- Korean,朝鲜语(ko)
- Zhuang,壮语
- Cantonese,粤语(yue)
Please read our GitHub repository for more details (Chinese): https://github.com/ymcui/Chinese-Minority-PLM
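A hedged quick-start sketch (not part of the original card): since CINO is built on XLM-R, the standard Auto classes should load it for masked-LM inference.
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("hfl/cino-base-v2")
model = AutoModelForMaskedLM.from_pretrained("hfl/cino-base-v2")
```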
You may also be interested in:
Chinese MacBERT: https://github.com/ymcui/MacBERT
Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/ymcui/HFL-Anthology
|
2ef46758154973d45ae4be3f80f74156
|
kejian/immaculate-awr
|
kejian
|
gpt2
| 36 | 1 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['kejian/codeparrot-train-more-filter-3.3b-cleaned']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 4,256 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# immaculate-awr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True},
'generation': {'batch_size': 128,
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 512},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_samples': 512,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'value_head_config': {'is_detached': False}},
'path_or_name': 'codeparrot/codeparrot-small'},
'objective': {'alpha': 0.05, 'beta': 1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 256,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'immaculate-awr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 6294,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/1nozp91y
|
c74b3f360f538c4df7efff56e5dc9781
|
theojolliffe/bart-large-cnn-finetuned-roundup-3-8
|
theojolliffe
|
bart
| 13 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,191 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-roundup-3-8
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4132
- Rouge1: 49.6606
- Rouge2: 28.4044
- Rougel: 31.5419
- Rougelsum: 46.2463
- Gen Len: 142.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| No log | 1.0 | 258 | 1.2686 | 48.8513 | 28.7007 | 31.1199 | 45.7318 | 142.0 |
| 1.1738 | 2.0 | 516 | 1.1884 | 49.8072 | 28.9817 | 31.3611 | 46.9639 | 141.6875 |
| 1.1738 | 3.0 | 774 | 1.1970 | 49.3865 | 28.3426 | 30.0945 | 46.4681 | 141.3438 |
| 0.7069 | 4.0 | 1032 | 1.1984 | 50.6743 | 29.4728 | 31.5364 | 47.989 | 141.7188 |
| 0.7069 | 5.0 | 1290 | 1.2494 | 49.4461 | 28.9295 | 31.0334 | 46.6611 | 142.0 |
| 0.4618 | 6.0 | 1548 | 1.2954 | 50.6789 | 30.2783 | 32.1932 | 47.5929 | 142.0 |
| 0.4618 | 7.0 | 1806 | 1.3638 | 49.9476 | 30.223 | 32.4346 | 46.7383 | 142.0 |
| 0.3293 | 8.0 | 2064 | 1.4132 | 49.6606 | 28.4044 | 31.5419 | 46.2463 | 142.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
aa5f40c76901de903e55d1b5b5c866bb
|
gokuls/distilbert_sa_GLUE_Experiment_mrpc_256
|
gokuls
|
distilbert
| 17 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,216 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_mrpc_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5996
- Accuracy: 0.6814
- F1: 0.8105
- Combined Score: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:|
| 0.6343 | 1.0 | 15 | 0.6246 | 0.6838 | 0.8122 | 0.7480 |
| 0.6276 | 2.0 | 30 | 0.6234 | 0.6838 | 0.8122 | 0.7480 |
| 0.6306 | 3.0 | 45 | 0.6243 | 0.6838 | 0.8122 | 0.7480 |
| 0.6279 | 4.0 | 60 | 0.6205 | 0.6838 | 0.8122 | 0.7480 |
| 0.6168 | 5.0 | 75 | 0.5996 | 0.6814 | 0.8105 | 0.7459 |
| 0.5632 | 6.0 | 90 | 0.6020 | 0.6936 | 0.7954 | 0.7445 |
| 0.5021 | 7.0 | 105 | 0.6094 | 0.6936 | 0.7841 | 0.7389 |
| 0.4263 | 8.0 | 120 | 0.6844 | 0.6299 | 0.7113 | 0.6706 |
| 0.3476 | 9.0 | 135 | 0.7218 | 0.6373 | 0.7098 | 0.6735 |
| 0.2966 | 10.0 | 150 | 0.7759 | 0.7010 | 0.7953 | 0.7481 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.8.0
- Tokenizers 0.13.2
|
b4a8e43a6bf1bca5bdb2ed0af76208d9
|
wietsedv/xlm-roberta-base-ft-udpos28-sa
|
wietsedv
|
xlm-roberta
| 8 | 15 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['sa']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['part-of-speech', 'token-classification']
| true | true | true | 568 | false |
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Sanskrit
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sa")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sa")
```
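A hedged usage sketch with the pipeline API (the sample sentence is illustrative):
```python
from transformers import pipeline

pos = pipeline(
    "token-classification",
    model="wietsedv/xlm-roberta-base-ft-udpos28-sa",
    aggregation_strategy="simple",
)
print(pos("अहं गृहं गच्छामि"))
```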
|
f637cc714a2176f7a237c0668fb16a59
|
emmyapi/distilbart-podimo-data-5
|
emmyapi
|
bart
| 13 | 2 |
transformers
| 1 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer', 'Summarization']
| true | true | true | 1,849 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-podimo-data-5
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1325
## Model description
model | rouge1 | rouge2 | rougeL | rougeLsum
--- | --- | --- | --- |---
sshleifer/distilbart-cnn-12-6 | 0.202654 | 0.025766 | 0.123072 | 0.130183
emmyapi/distilbart-podimo-data-3 | 0.235147 | 0.047087 | 0.151535 | 0.161782
emmyapi/distilbart-podimo-data-4 | 0.236926 | 0.048327 | 0.153539 | 0.165026
emmyapi/distilbart-podimo-data-5 | 0.259024 | 0.061665 | 0.167187 | 0.178399
emmyapi/distilbart-podimo-data-7 | 0.298888 | 0.059900 | 0.159479 | 0.185049
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.3477 | 3.33 | 500 | 3.7027 |
| 2.6286 | 6.66 | 1000 | 3.6995 |
| 2.0718 | 10.0 | 1500 | 3.8868 |
| 1.7806 | 13.33 | 2000 | 4.1325 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
842b08b074e9fdded6c1e8dbfd4cecfc
|
Mentatko/distilbert-base-uncased-finetuned-squad
|
Mentatko
|
distilbert
| 10 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 926 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.23.1
- Pytorch 1.13.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
|
297699c42e3222a12016c7ab73f7b3cf
|
anuragshas/whisper-large-v2-hi-v3
|
anuragshas
|
whisper
| 23 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['hi']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,624 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Hindi
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3191
- Wer: 11.3039
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0479 | 2.06 | 200 | 0.2189 | 12.3226 |
| 0.0081 | 5.06 | 400 | 0.2649 | 11.5740 |
| 0.001 | 8.06 | 600 | 0.2998 | 11.4252 |
| 0.0004 | 11.05 | 800 | 0.3191 | 11.3039 |
| 0.0003 | 14.05 | 1000 | 0.3267 | 11.3291 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
a5df2bfbea2d8109b98b9c9404168716
|
DOOGLAK/Article_250v3_NER_Model_3Epochs_AUGMENTED
|
DOOGLAK
|
bert
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['article250v3_wikigold_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_250v3_NER_Model_3Epochs_AUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v3_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2531
- Precision: 0.6347
- Recall: 0.6342
- F1: 0.6345
- Accuracy: 0.9207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 82 | 0.2668 | 0.5478 | 0.5370 | 0.5424 | 0.9064 |
| No log | 2.0 | 164 | 0.2516 | 0.6272 | 0.6154 | 0.6212 | 0.9179 |
| No log | 3.0 | 246 | 0.2531 | 0.6347 | 0.6342 | 0.6345 | 0.9207 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
f49b6dfea0c0d8142e77f861821b1447
|
gggggxy/ddpm-butterflies-128
|
gggggxy
| null | 13 | 1 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,229 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
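# A hedged sketch (not from the original card), assuming the standard
# unconditional DDPMPipeline API from diffusers:
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("gggggxy/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly_sample.png")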
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/gggggxy/ddpm-butterflies-128/tensorboard?#scalars)
|
8f5a6d5841b4f9175bf27cd02372547c
|
OFA-Sys/small-stable-diffusion-v0
|
OFA-Sys
| null | 19 | 1,849 |
diffusers
| 36 |
text-to-image
| false | false | false |
openrail
|
['en']
|
['ChristophSchuhmann/improved_aesthetics_6plus']
| null | 3 | 1 | 2 | 0 | 3 | 1 | 2 |
['stable-diffusion', 'stable-diffusion-diffusers', 'text-to-image']
| false | true | true | 9,159 | false |
# Small Stable Diffusion Model Card
【Update 2023/02/07】 Recently, we have released [a diffusion deployment repo](https://github.com/OFA-Sys/diffusion-deploy) to speed up inference on both GPU (\~4x speedup, based on TensorRT) and CPU (\~12x speedup, based on Intel OpenVINO).
Integrated with this repo, small-stable-diffusion could generate images in just **5 seconds on the CPU**\*.
*\* Test on Intel(R) Xeon(R) Platinum 8369B CPU, DPMSolverMultistepScheduler 10 steps, fix channel/height/width when converting to Onnx*
Similar image generation quality, but the model is nearly half the size!
Here are some samples:

# Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run small-stable-diffusion-v0:
[](https://huggingface.co/spaces/akhaliq/small-stable-diffusion-v0)
We also provide a space demo for [`small-stable-diffusion-v0 + diffusion-deploy`](https://huggingface.co/spaces/OFA-Sys/FAST-CPU-small-stable-diffusion-v0).
*As Hugging Face provides an AMD CPU for the Space demo, it takes about 35 seconds to generate an image with 15 steps, which is much slower than the Intel CPU environment, since diffusion-deploy is based on Intel's OpenVINO.*
## Example
*Use `diffusers` >= 0.8.0; lower versions are not supported.*
```python
import torch
from diffusers import StableDiffusionPipeline
model_id = "OFA-Sys/small-stable-diffusion-v0/"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "an apple, 4k"
image = pipe(prompt).images[0]
image.save("apple.png")
```
# Training
### Initialization
This model is initialized from stable-diffusion v1-4. As the model structure is not the same as stable diffusion and the number of parameters is smaller, the parameters of stable diffusion cannot be reused directly. Therefore, small stable diffusion sets `layers_per_block=1` and selects the first layer of each block of the original stable diffusion to initialize the small model.
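A hedged, simplified sketch of this initialization idea (the helper below is illustrative, not the actual training code; the real procedure operates on the stable-diffusion v1-4 UNet):
```python
import torch

def init_student_from_teacher(student: torch.nn.Module, teacher: torch.nn.Module) -> None:
    """Copy teacher weights into the student wherever names and shapes match.

    With layers_per_block=1, the student's parameter names correspond to the
    first layer of each block of the teacher, so those weights carry over.
    """
    student_state = student.state_dict()
    for name, tensor in teacher.state_dict().items():
        if name in student_state and student_state[name].shape == tensor.shape:
            student_state[name] = tensor.clone()
    student.load_state_dict(student_state)
```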
### Training Procedure
After the initialization, the model was trained for 1100k steps on 8 A100 GPUs. The training process consists of three stages. The first stage is a simple pre-training procedure. In the last two stages, the original stable diffusion was used as a teacher model to distill knowledge into the small model. In all stages, only the parameters of the UNet were trained and the other parameters were frozen.
- **Hardware:** 8 x A100-80GB GPUs
- **Optimizer:** AdamW
- **Stage 1** - Pretrain the unet part of the model.
- **Steps**: 500,000
- **Batch:** batch size=8, GPUs=8, Gradient Accumulations=2. Total batch size=128
- **Learning rate:** warmup to 1e-5 for 10,000 steps and then kept constant
- **Stage 2** - Distill the model using stable-diffusion v1-4 as the teacher. Besides the ground truth, the training in this stage also uses the soft label (`pred_noise`) generated by the teacher model (a hedged sketch of this objective is shown after this list).
- **Steps**: 400,000
- **Batch:** batch size=8, GPUs=8, Gradient Accumulations=2. Total batch size=128
- **Learning rate:** warmup to 1e-5 for 5,000 steps and then kept constant
- **Soft label weight:** 0.5
- **Hard label weight:** 0.5
- **Stage 3** - Distill the model using stable-diffusion v1-5 as the teacher. This stage uses several techniques from `Knowledge Distillation of Transformer-based Language Models Revisited`, including a similarity-based layer match in addition to the soft label.
- **Steps**: 200,000
- **Batch:** batch size=8, GPUs=8, Gradient Accumulations=2. Total batch size=128
- **Learning rate:** warmup to 1e-5 for 5,000 steps and then kept constant
- **Soft label weight:** 0.5
- **Hard label weight:** 0.5
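A hedged sketch of the soft/hard-label distillation objective described above (variable names are illustrative; the actual training code is not included in this card):
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_pred: torch.Tensor,
                      teacher_pred: torch.Tensor,
                      target_noise: torch.Tensor,
                      soft_weight: float = 0.5,
                      hard_weight: float = 0.5) -> torch.Tensor:
    # Hard label: match the ground-truth noise of the diffusion objective
    hard = F.mse_loss(student_pred, target_noise)
    # Soft label: match the teacher's predicted noise (pred_noise)
    soft = F.mse_loss(student_pred, teacher_pred.detach())
    return hard_weight * hard + soft_weight * soft
```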
### Training Data
The model developers used the following dataset for training the model:
1. [LAION-2B en aesthetic](https://huggingface.co/datasets/laion/laion2B-en-aesthetic)
2. [LAION-Art](https://huggingface.co/datasets/laion/laion-art)
3. [LAION-HD](https://huggingface.co/datasets/laion/laion-high-resolution)
### Citation
```bibtex
@article{Lu2022KnowledgeDO,
title={Knowledge Distillation of Transformer-based Language Models Revisited},
author={Chengqiang Lu and Jianwei Zhang and Yunfei Chu and Zhengyu Chen and Jingren Zhou and Fei Wu and Haiqing Chen and Hongxia Yang},
journal={ArXiv},
year={2022},
volume={abs/2206.14366}
}
```
# Uses
_The following section is adapted from the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4)_
## Direct Use
The model is intended for research purposes only. Possible research areas and
tasks include
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
Excluded uses are described below.
### Misuse, Malicious Use, and Out-of-Scope Use
The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy
- The model was trained on a large-scale dataset
[LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
and is not fit for product use without additional safety mechanisms and
considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are primarily limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
### Safety Module
The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPModel` *after generation* of the images.
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
*This model card was written by: Justin Pinkney and is based on the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).*
|
bb65bdc32b48fb65c0d7a0e2ae3e177f
|
ankile/ddpm-pcam-96-first-100
|
ankile
| null | 11 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['pcam-96']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,202 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-pcam-96-first-100
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `pcam-96` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
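# A hedged sketch (not from the original card), assuming the standard
# unconditional DDPMPipeline API from diffusers:
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("ankile/ddpm-pcam-96-first-100")
image = pipeline().images[0]
image.save("pcam_sample.png")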
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/ankile/ddpm-pcam-96-first-100/tensorboard?#scalars)
|
219c1d36f4d9cae8a66f6fc816ddb56d
|
saattrupdan/xlmr-base-texas-squad-da
|
saattrupdan
|
xlm-roberta
| 32 | 40 |
transformers
| 2 |
question-answering
| true | false | false |
mit
|
['da']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,090 | false |
# TExAS-SQuAD-da
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the TExAS-SQuAD-da dataset.
It achieves the following results on the evaluation set:
- Exact match: 63.96%
- F1-score: 68.40%
In comparison, the `jacobshein/danish-bert-botxo-qa-squad` model achieves 30.37% EM and 37.15% F1.
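A hedged usage sketch with the pipeline API (the question/context pair is illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="saattrupdan/xlmr-base-texas-squad-da")
print(qa(question="Hvad hedder Danmarks hovedstad?",
         context="Danmarks hovedstad er København."))
```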
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6438 | 1.0 | 4183 | 1.4711 |
| 1.4079 | 2.0 | 8366 | 1.4356 |
| 1.2532 | 3.0 | 12549 | 1.4509 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.8.1+cu101
- Datasets 1.12.1
- Tokenizers 0.10.3
|
62beb2b8769e55b1d830585465a3c978
|
EdBianchi/GPT-2-finetuned-papers
|
EdBianchi
|
gpt2
| 9 | 14 |
transformers
| 0 |
text-generation
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# EdBianchi/GPT-2-finetuned-papers
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.4718
- Validation Loss: 2.2371
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'ExponentialDecay', 'config': {'initial_learning_rate': 0.0005, 'decay_steps': 500, 'decay_rate': 0.95, 'staircase': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.4718 | 2.2371 | 0 |
### Framework versions
- Transformers 4.21.3
- TensorFlow 2.10.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
4b107de4b033de7985550ef2df9c8b43
|
anas-awadalla/roberta-base-few-shot-k-1024-finetuned-squad-seed-42
|
anas-awadalla
|
roberta
| 13 | 7 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,037 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 66.90633869441817, 'f1': 77.54482247690522}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
1d88570bfe277ee9d3ed7b23844ee8f2
|
scasutt/wav2vec2-large-xlsr-53_toy_train_data_random_low_pass
|
scasutt
|
wav2vec2
| 7 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,823 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_random_low_pass
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6572
- Wer: 0.4973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0834 | 2.1 | 500 | 3.4478 | 1.0 |
| 1.0735 | 4.2 | 1000 | 0.9113 | 0.7815 |
| 0.5516 | 6.3 | 1500 | 0.7035 | 0.6081 |
| 0.4023 | 8.4 | 2000 | 0.6647 | 0.5649 |
| 0.3423 | 10.5 | 2500 | 0.6613 | 0.5450 |
| 0.2938 | 12.6 | 3000 | 0.6967 | 0.5318 |
| 0.2902 | 14.7 | 3500 | 0.6430 | 0.5089 |
| 0.2372 | 16.81 | 4000 | 0.6653 | 0.5045 |
| 0.2148 | 18.91 | 4500 | 0.6572 | 0.4973 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.12.0
|
35f0855e09e30681b1cba8740ed47a96
|
UchihaMadara/bert-finetuned-ner
|
UchihaMadara
|
bert
| 12 | 1 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,498 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0636
- Precision: 0.9410
- Recall: 0.9529
- F1: 0.9469
- Accuracy: 0.9862
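A minimal usage sketch with the token-classification pipeline; the sentence is an illustrative placeholder, and the aggregation setting is an assumption rather than part of the training setup.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="UchihaMadara/bert-finetuned-ner",
    aggregation_strategy="simple",  # group word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```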
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0863 | 1.0 | 1756 | 0.0673 | 0.9231 | 0.9335 | 0.9283 | 0.9827 |
| 0.0329 | 2.0 | 3512 | 0.0625 | 0.9297 | 0.9485 | 0.9390 | 0.9856 |
| 0.0171 | 3.0 | 5268 | 0.0636 | 0.9410 | 0.9529 | 0.9469 | 0.9862 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
712d98dfbed0067dc2dbe71e191977e1
|
speechbrain/google_speech_command_xvector
|
speechbrain
| null | 10 | 160 |
speechbrain
| 2 |
audio-classification
| true | false | false |
apache-2.0
|
['en']
|
['google speech commands']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['speechbrain', 'embeddings', 'Commands', 'Keywords', 'Keyword Spotting', 'pytorch', 'xvectors', 'TDNN', 'Command Recognition', 'audio-classification']
| false | true | true | 4,846 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Command Recognition with xvector embeddings on Google Speech Commands
This repository provides all the necessary tools to perform command recognition with SpeechBrain using a model pretrained on Google Speech Commands.
You can download the dataset [here](https://www.tensorflow.org/datasets/catalog/speech_commands)
The dataset provides small training, validation, and test sets useful for detecting single keywords in short audio clips. The provided system can recognize the following 12 keywords:
```
'yes', 'no', 'up', 'down', 'left', 'right', 'on', 'off', 'stop', 'go', 'unknown', 'silence'
```
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The given model performance on the test set is:
| Release | Accuracy (%) |
|:-------------:|:--------------:|
| 06-02-21 | 98.14 |
## Pipeline description
This system is composed of a TDNN model coupled with statistical pooling. A classifier, trained with Categorical Cross-Entropy Loss, is applied on top of that.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Command Recognition
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/google_speech_command_xvector", savedir="pretrained_models/google_speech_command_xvector")
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/google_speech_command_xvector/yes.wav')
print(text_lab)
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/google_speech_command_xvector/stop.wav')
print(text_lab)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain (b7ff9dc4).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/Google-speech-commands
python train.py hparams/xvect.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1BKwtr1mBRICRe56PcQk2sCFq63Lsvdpc?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing xvectors
```bibtex
@inproceedings{DBLP:conf/odyssey/SnyderGMSPK18,
author = {David Snyder and
Daniel Garcia{-}Romero and
Alan McCree and
Gregory Sell and
Daniel Povey and
Sanjeev Khudanpur},
title = {Spoken Language Recognition using X-vectors},
booktitle = {Odyssey 2018},
pages = {105--111},
year = {2018},
}
```
#### Referencing Google Speech Commands
```bibtex
@article{speechcommands,
author = { {Warden}, P.},
title = "{Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1804.03209},
primaryClass = "cs.CL",
keywords = {Computer Science - Computation and Language, Computer Science - Human-Computer Interaction},
year = 2018,
month = apr,
url = {https://arxiv.org/abs/1804.03209},
}
```
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
cffa14cd07a50ad41054f0d4c7c1e98d
|
yinde/fatimah_fake_news_bert
|
yinde
|
distilbert
| 10 | 1 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,424 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fatimah_fake_news_bert
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the [Fake and real news dataset on Kaggle](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset).
It achieves the following results on the evaluation set:
- Loss: 0.0010
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3298 | 0.06 | 200 | 0.0094 | 0.9987 |
| 0.0087 | 0.11 | 400 | 0.0091 | 0.9988 |
| 0.0126 | 0.17 | 600 | 0.0132 | 0.9965 |
| 0.0081 | 0.22 | 800 | 0.0100 | 0.9987 |
| 0.0132 | 0.28 | 1000 | 0.0086 | 0.9990 |
| 0.0131 | 0.33 | 1200 | 0.0070 | 0.9986 |
| 0.0086 | 0.39 | 1400 | 0.0079 | 0.9990 |
| 0.0041 | 0.45 | 1600 | 0.0057 | 0.9991 |
| 0.0069 | 0.5 | 1800 | 0.0083 | 0.9989 |
| 0.0052 | 0.56 | 2000 | 0.0043 | 0.9993 |
| 0.0 | 0.61 | 2200 | 0.0047 | 0.9993 |
| 0.003 | 0.67 | 2400 | 0.0052 | 0.9994 |
| 0.0126 | 0.72 | 2600 | 0.0028 | 0.9997 |
| 0.0047 | 0.78 | 2800 | 0.0018 | 0.9996 |
| 0.0 | 0.84 | 3000 | 0.0027 | 0.9996 |
| 0.0001 | 0.89 | 3200 | 0.0029 | 0.9996 |
| 0.0079 | 0.95 | 3400 | 0.0010 | 0.9998 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
cd5486529374328cb60f5f5abf19f3f8
|
mqy/mt5-small-finetuned-18jan-3
|
mqy
|
mt5
| 21 | 6 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 2,150 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-18jan-3
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6115
- Rouge1: 7.259
- Rouge2: 0.3667
- Rougel: 7.1595
- Rougelsum: 7.156
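A minimal, untested usage sketch; the placeholder text and the generation settings (`max_length`, `num_beams`) are assumptions, not the values used for the evaluation above.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="mqy/mt5-small-finetuned-18jan-3")
# Placeholder input; use text in the language the model was fine-tuned on.
text = "Replace this placeholder with the article you want to summarize."
print(summarizer(text, max_length=64, num_beams=4)[0]["summary_text"])
```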
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 7.1947 | 1.0 | 60 | 3.1045 | 5.91 | 0.8583 | 5.8687 | 5.8123 |
| 3.8567 | 2.0 | 120 | 2.7744 | 8.0065 | 0.4524 | 8.0204 | 7.85 |
| 3.4346 | 3.0 | 180 | 2.7319 | 7.5954 | 0.4524 | 7.5204 | 7.4833 |
| 3.219 | 4.0 | 240 | 2.6736 | 8.5329 | 0.3333 | 8.487 | 8.312 |
| 3.0836 | 5.0 | 300 | 2.6583 | 8.3405 | 0.5667 | 8.2003 | 8.0543 |
| 2.9713 | 6.0 | 360 | 2.6516 | 8.8421 | 0.1667 | 8.7597 | 8.6754 |
| 2.9757 | 7.0 | 420 | 2.6369 | 8.04 | 0.3667 | 8.0018 | 7.8489 |
| 2.8321 | 8.0 | 480 | 2.6215 | 6.8739 | 0.3667 | 6.859 | 6.7917 |
| 2.794 | 9.0 | 540 | 2.6090 | 7.0738 | 0.4167 | 7.0232 | 6.9619 |
| 2.7695 | 10.0 | 600 | 2.6115 | 7.259 | 0.3667 | 7.1595 | 7.156 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
f12d03216f222c9592d0e03ce1b754cf
|
caush/Clickbait4
|
caush
|
bert
| 5 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,819 | false |
This model is a fine-tuned version of microsoft/Multilingual-MiniLM-L12-H384 on the Webis-Clickbait-17 dataset. It achieves the following results on the evaluation set:
- Loss: 0.0261

The table below lists the performances achieved by the challenge participants. The primary evaluation measure is Mean Squared Error (MSE) with respect to the mean judgments of the annotators. Our result is 0.0261 on the MSE metric; we do not compute the other metrics. To keep the comparison fair, we did not use data that was unavailable at the time of the challenge, nor k-fold cross-validation. A minimal scoring sketch follows the table.
| team | MSE | F1 | Precision | Recall| Accuracy| Runtime |
|----- |----- |--- |-----------|-------|---------|-------- |
|goldfish | 0.024 | 0.741 | 0.739 | 0.742 | 0.876 | 16:20:21|
|caush | 0.026 | | | | | 00:11:00|
|monkfish | 0.026 | 0.694 | 0.785 | 0.622 | 0.870 | 03:41:35|
|dartfish | 0.027 | 0.706 | 0.733 | 0.681 | 0.865 | 00:47:07|
|torpedo19 | 0.03 | 0.677 | 0.755 | 0.614 | 0.861 | 00:52:44|
|albacore | 0.031 | 0.67 | 0.731 | 0.62 | 0.855 | 00:01:10|
|blobfish | 0.032 | 0.646 | 0.738 | 0.574 | 0.85 | 00:03:22|
|zingel | 0.033 | 0.683 | 0.719 | 0.65 | 0.856 | 00:03:27|
|anchovy | 0.034 | 0.68 | 0.717 | 0.645 | 0.855 | 00:07:20|
|ray | 0.034 | 0.684 | 0.691 | 0.677 | 0.851 | 00:29:28|
|icarfish | 0.035 | 0.621 | 0.768 | 0.522 | 0.849 | 01:02:57|
|emperor | 0.036 | 0.641 | 0.714 | 0.581 | 0.845 | 00:04:03|
|carpetshark | 0.036 | 0.638 | 0.728 | 0.568 | 0.847 | 00:08:05|
|electriceel | 0.038 | 0.588 | 0.727 | 0.493 | 0.835 | 01:04:54|
|arowana | 0.039 | 0.656 | 0.659 | 0.654 | 0.837 | 00:35:24|
|pineapplefish | 0.041 | 0.631 | 0.642 | 0.621 | 0.827 | 00:54:28|
|whitebait | 0.043 | 0.565 | 0.7 | 0.474 | 0.826 | 00:04:31|
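As mentioned above, here is a minimal scoring sketch. It assumes the checkpoint exposes a single regression-style logit (a clickbait intensity score); if the head is configured with multiple labels, adapt the post-processing accordingly. The headline is made up for illustration.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("caush/Clickbait4")
model = AutoModelForSequenceClassification.from_pretrained("caush/Clickbait4")

title = "You will never believe what happened next"
inputs = tokenizer(title, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # assumes a single output neuron
print(f"Clickbait score: {score:.3f}")
```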
|
6c995e257675239d3681c4b9ef14c215
|
pszemraj/gpt2-medium-vaguely-human-dialogue
|
pszemraj
|
gpt2
| 11 | 5 |
transformers
| 0 |
text-generation
| true | false | false |
mit
|
['en']
|
['natural questions']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-generation', 'gpt2', 'gpt']
| false | true | true | 2,146 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pszemraj/gpt2-medium-vaguely-human-dialogue
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on a parsed version of Wizard of Wikipedia. Because the batch size was so large, it learned a general understanding of words that makes sense together but does not specifically respond to anything - sort of like an alien learning to imitate human words to convince others that it is human.
It achieves the following results on the evaluation set:
- Loss: 4.3281
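For completeness, a minimal generation sketch; the prompt and sampling settings are arbitrary illustrations rather than recommended values.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="pszemraj/gpt2-medium-vaguely-human-dialogue")
prompt = "Person A: How are you today?\nPerson B:"
print(generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```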
## Model description
- a decent example of what happens when your batch size is too large and the global optimum does not reflect specific prompts / use cases.
## Intended uses & limitations
- there are no intended uses
## Training and evaluation data
- a parsed version of the wizard of Wikipedia dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 34.991 | 1.0 | 837 | 14.8359 |
| 12.2881 | 2.0 | 1674 | 9.375 |
| 8.5071 | 3.0 | 2511 | 7.2148 |
| 7.6031 | 4.0 | 3348 | 6.1758 |
| 6.4808 | 5.0 | 4185 | 5.5820 |
| 5.8562 | 6.0 | 5022 | 5.0977 |
| 5.6094 | 7.0 | 5859 | 4.8203 |
| 5.2591 | 8.0 | 6696 | 4.5977 |
| 5.0031 | 9.0 | 7533 | 4.4219 |
| 4.8837 | 10.0 | 8370 | 4.3281 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
5cd1245a625cec4da8776e078c4b5daa
|
timm/levit_256.fb_dist_in1k
|
timm
| null | 4 | 754 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 3,927 | false |
# Model card for levit_256.fb_dist_in1k
A LeViT image classification model using convolutional mode (using nn.Conv2d and nn.BatchNorm2d). Pretrained on ImageNet-1k using distillation by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 18.9
- GMACs: 1.1
- Activations (M): 4.2
- Image size: 224 x 224
- **Papers:**
- LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference: https://arxiv.org/abs/2104.01136
- **Original:** https://github.com/facebookresearch/LeViT
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('levit_256.fb_dist_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'levit_256.fb_dist_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
|model |top1 |top5 |param_count|img_size|
|-----------------------------------|------|------|-----------|--------|
|levit_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_conv_384.fb_dist_in1k |82.596|96.012|39.13 |224 |
|levit_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_256.fb_dist_in1k |81.512|95.48 |18.89 |224 |
|levit_conv_192.fb_dist_in1k |79.86 |94.792|10.95 |224 |
|levit_192.fb_dist_in1k |79.858|94.792|10.95 |224 |
|levit_128.fb_dist_in1k |78.474|94.014|9.21 |224 |
|levit_conv_128.fb_dist_in1k |78.474|94.02 |9.21 |224 |
|levit_128s.fb_dist_in1k |76.534|92.864|7.78 |224 |
|levit_conv_128s.fb_dist_in1k |76.532|92.864|7.78 |224 |
## Citation
```bibtex
@InProceedings{Graham_2021_ICCV,
author = {Graham, Benjamin and El-Nouby, Alaaeldin and Touvron, Hugo and Stock, Pierre and Joulin, Armand and Jegou, Herve and Douze, Matthijs},
title = {LeViT: A Vision Transformer in ConvNet's Clothing for Faster Inference},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2021},
pages = {12259-12269}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
f776c3d6c293b3334c2930556b2623d8
|
cjbarrie/bert-base-multilingual-uncased-finetuned-masress
|
cjbarrie
|
bert
| 16 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,577 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased-finetuned-masress
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0946
- Accuracy: 0.5782
- F1: 0.5769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.1646 | 1.0 | 151 | 1.0626 | 0.5588 | 0.5566 |
| 0.9281 | 2.0 | 302 | 0.9800 | 0.5869 | 0.5792 |
| 0.8269 | 3.0 | 453 | 1.0134 | 0.5911 | 0.5775 |
| 0.7335 | 4.0 | 604 | 1.0644 | 0.5861 | 0.5816 |
| 0.6786 | 5.0 | 755 | 1.0946 | 0.5782 | 0.5769 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
96c067601a3bd1bb9f3ec186630f606c
|
MarcNg/fastspeech2-vi-infore
|
MarcNg
| null | 5 | 0 |
tensorflowtts
| 1 |
text-to-speech
| false | false | false |
apache-2.0
|
['vi']
|
['infore']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tensorflowtts', 'audio', 'text-to-speech', 'text-to-mel']
| false | true | true | 1,752 | false |
# Install TensorFlowTTS
```
pip install TensorFlowTTS
```
## Converting your Text to Mel Spectrogram
```python
import numpy as np
import soundfile as sf
import yaml
import IPython.display as ipd
import tensorflow as tf
from tensorflow_tts.inference import AutoProcessor
from tensorflow_tts.inference import TFAutoModel
processor = AutoProcessor.from_pretrained("MarcNg/fastspeech2-vi-infore")
fastspeech2 = TFAutoModel.from_pretrained("MarcNg/fastspeech2-vi-infore")
text = "xin chào đây là một ví dụ về chuyển đổi văn bản thành giọng nói"
input_ids = processor.text_to_sequence(text)
mel_before, mel_after, duration_outputs, _, _ = fastspeech2.inference(
input_ids=tf.expand_dims(tf.convert_to_tensor(input_ids, dtype=tf.int32), 0),
speaker_ids=tf.convert_to_tensor([0], dtype=tf.int32),
speed_ratios=tf.convert_to_tensor([1.0], dtype=tf.float32),
f0_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32),
energy_ratios =tf.convert_to_tensor([1.0], dtype=tf.float32),
)
```
## Bonus: Convert Mel Spectrogram to Speech
```python
mb_melgan = TFAutoModel.from_pretrained("tensorspeech/tts-mb_melgan-ljspeech-en")
audio_before = mb_melgan.inference(mel_before)[0, :, 0]
audio_after = mb_melgan.inference(mel_after)[0, :, 0]
sf.write("audio_before.wav", audio_before, 22050, "PCM_16")
sf.write("audio_after.wav", audio_after, 22050, "PCM_16")
ipd.Audio('audio_after.wav')
```
#### Referencing FastSpeech2
```
@misc{ren2021fastspeech,
title={FastSpeech 2: Fast and High-Quality End-to-End Text to Speech},
author={Yi Ren and Chenxu Hu and Xu Tan and Tao Qin and Sheng Zhao and Zhou Zhao and Tie-Yan Liu},
year={2021},
eprint={2006.04558},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
dc70db2ab510b5d21a60055c818302c3
|
therealcyberlord/fake-news-classification-distilbert
|
therealcyberlord
|
distilbert
| 7 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 406 | false |
# Fake News Classification Distilbert 🤗
This model was trained on 32,326 news articles from CLÉMENT BISAILLON's dataset on Kaggle. The goal is to classify fake news from real news.
0 : Fake News, 1 : Real News
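A minimal, untested usage sketch (the headline is a made-up example); label ids follow the mapping above.
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="therealcyberlord/fake-news-classification-distilbert",
)
print(clf("Scientists confirm that drinking coffee grants immortality."))
```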
# Sources
Dataset used: https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset
Base Distilbert: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english
|
93858cf5e618e944fa6d5241878d3724
|
gustavecortal/distilcamembert-cae-no-territory
|
gustavecortal
|
camembert
| 6 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,678 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilcamembert-cae-no-territory
This model is a fine-tuned version of [cmarkea/distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6885
- Precision: 0.7873
- Recall: 0.7848
- F1: 0.7855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 1.1796 | 1.0 | 40 | 0.9743 | 0.5640 | 0.4937 | 0.3731 |
| 0.8788 | 2.0 | 80 | 0.8037 | 0.7438 | 0.6709 | 0.6472 |
| 0.4982 | 3.0 | 120 | 0.7692 | 0.8264 | 0.7089 | 0.7558 |
| 0.2865 | 4.0 | 160 | 0.7676 | 0.7498 | 0.7215 | 0.7192 |
| 0.1502 | 5.0 | 200 | 0.6885 | 0.7873 | 0.7848 | 0.7855 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
b38e88760283c13466d6b8bc3c35884a
|
google/multiberts-seed_0-step_1200k
|
google
|
bert
| 8 | 13 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1200k']
| false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1200k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 1200k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1200k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1200k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1200k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
6c1de90576785bc3d6fa900181b00d32
|
Helsinki-NLP/opus-mt-mk-en
|
Helsinki-NLP
|
marian
| 10 | 695 |
transformers
| 1 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-mk-en
* source languages: mk
* target languages: en
* OPUS readme: [mk-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/mk-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/mk-en/opus-2019-12-18.eval.txt)
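A minimal, untested usage sketch with the Transformers Marian classes; the example sentence is illustrative.
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-mk-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Macedonian source sentence ("Hello, how are you?"), chosen only as an example.
batch = tokenizer(["Здраво, како си?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```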
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.mk.en | 59.8 | 0.720 |
|
359548f01a58ad54a46f1c6bd410ff8b
|
KoichiYasuoka/bert-base-japanese-char-extended
|
KoichiYasuoka
|
bert
| 8 | 8 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ja']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['japanese', 'masked-lm', 'wikipedia']
| false | true | true | 859 | false |
# bert-base-japanese-char-extended
## Model Description
This is a BERT model pre-trained on Japanese Wikipedia texts, derived from [bert-base-japanese-char-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-char-v2). Character-embeddings are enhanced to include all 常用漢字/人名用漢字 characters using BertTokenizerFast. You can fine-tune `bert-base-japanese-char-extended` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/bert-base-japanese-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/bert-base-japanese-wikipedia-ud-head), and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/bert-base-japanese-char-extended")
```
|
9a36186495b48574246ded47159d2226
|
BruceZJC/distilbert-base-uncased-finetuned-squad
|
BruceZJC
|
distilbert
| 34 | 3 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,280 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7362
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7793 | 1.0 | 554 | 1.9337 |
| 1.4469 | 2.0 | 1108 | 1.7193 |
| 1.1585 | 3.0 | 1662 | 1.7362 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
8186a8530865e6858b5682aeba3fa100
|
rymaju/NL-RX-Synth-t5-small-finetuned-en-to-regex
|
rymaju
|
t5
| 32 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,728 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NL-RX-Synth-t5-small-finetuned-en-to-regex
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0131
- Semantic-accuracy: 0.36
- Gen Len: 18.24
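A minimal, untested usage sketch; the input phrasing is an assumption, since the exact preprocessing used during fine-tuning is not documented here.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "rymaju/NL-RX-Synth-t5-small-finetuned-en-to-regex"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical natural-language description of a regular expression.
inputs = tokenizer("lines containing the word dog before a number", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```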
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Semantic-accuracy | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:-------:|
| 0.2382 | 1.0 | 563 | 0.0431 | 0.322 | 18.224 |
| 0.0477 | 2.0 | 1126 | 0.0229 | 0.356 | 18.236 |
| 0.0305 | 3.0 | 1689 | 0.0259 | 0.34 | 18.266 |
| 0.0231 | 4.0 | 2252 | 0.0204 | 0.35 | 18.238 |
| 0.0197 | 5.0 | 2815 | 0.0162 | 0.352 | 18.232 |
| 0.02 | 6.0 | 3378 | 0.0162 | 0.354 | 18.238 |
| 0.0172 | 7.0 | 3941 | 0.0147 | 0.356 | 18.24 |
| 0.0145 | 8.0 | 4504 | 0.0259 | 0.34 | 18.246 |
| 0.0133 | 9.0 | 5067 | 0.0129 | 0.358 | 18.238 |
| 0.0131 | 10.0 | 5630 | 0.0121 | 0.366 | 18.242 |
| 0.0122 | 11.0 | 6193 | 0.0128 | 0.354 | 18.242 |
| 0.0123 | 12.0 | 6756 | 0.0129 | 0.356 | 18.222 |
| 0.0113 | 13.0 | 7319 | 0.0131 | 0.362 | 18.232 |
| 0.0095 | 14.0 | 7882 | 0.0124 | 0.358 | 18.238 |
| 0.0102 | 15.0 | 8445 | 0.0127 | 0.362 | 18.244 |
| 0.0089 | 16.0 | 9008 | 0.0126 | 0.358 | 18.242 |
| 0.0086 | 17.0 | 9571 | 0.0133 | 0.358 | 18.242 |
| 0.0084 | 17.76 | 10000 | 0.0131 | 0.36 | 18.24 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
c215c0040fd0d388ddc036f437e83175
|
cambridgeltl/SapBERT-from-PubMedBERT-fulltext
|
cambridgeltl
|
bert
| 9 | 91,300 |
transformers
| 9 |
feature-extraction
| true | true | true |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['biomedical', 'lexical semantics', 'bionlp', 'biology', 'science', 'embedding', 'entity linking']
| false | true | true | 2,592 | false |
**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!
### Expected input and output
The input should be a string of biomedical entity names, e.g., "covid infection" or "Hydroxychloroquine". The [CLS] embedding of the last layer is regarded as the output.
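A minimal sketch of extracting the [CLS] embeddings described above (batching and GPU placement are omitted for brevity):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext")

names = ["covid infection", "Hydroxychloroquine"]
toks = tokenizer(names, padding=True, return_tensors="pt")
with torch.no_grad():
    cls_embeddings = model(**toks).last_hidden_state[:, 0, :]  # [CLS] of the last layer
print(cls_embeddings.shape)
```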
### SapBERT-PubMedBERT
SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model.
### Citation
```bibtex
@inproceedings{liu-etal-2021-self,
title = "Self-Alignment Pretraining for Biomedical Entity Representations",
author = "Liu, Fangyu and
Shareghi, Ehsan and
Meng, Zaiqiao and
Basaldella, Marco and
Collier, Nigel",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
pages = "4228--4238",
abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERTand and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```
|
5eff842ef8adc4e28beb2f4594e7584a
|
AbhilashDatta/T5_qgen-squad-marco
|
AbhilashDatta
|
t5
| 7 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
afl-3.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 946 | false |
# Question generation using T5 transformer
<h2> <i>Input format: context: "..." answer: "..." </i></h2>
Import the pretrained model as well as tokenizer:
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
model = T5ForConditionalGeneration.from_pretrained('AbhilashDatta/T5_qgen-squad-marco')
tokenizer = T5Tokenizer.from_pretrained('AbhilashDatta/T5_qgen-squad-marco')
```
Then use the tokenizer to encode/decode and model to generate:
```
input = "context: My name is Abhilash Datta. answer: Abhilash"
batch = tokenizer(input, padding='longest', max_length=512, return_tensors='pt')
inputs_batch = batch['input_ids'][0]
inputs_batch = torch.unsqueeze(inputs_batch, 0)
ques_id = model.generate(inputs_batch, max_length=100, early_stopping=True)
ques_batch = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in ques_id]
print(ques_batch)
```
Output:
```
['what is my name']
```
|
bf049ffe3d650550dc10dc3e779cebce
|
BeardedJohn/bert-finetuned-ner-ubb-conll-endava-only-misc-v2
|
BeardedJohn
|
bert
| 8 | 45 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,447 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-ubb-conll-endava-only-misc-v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0190
- Validation Loss: 0.0310
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1365, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2091 | 0.0391 | 0 |
| 0.0336 | 0.0322 | 1 |
| 0.0190 | 0.0310 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
e7c56fd3543bcfbefae23af43ad9b853
|
Yehor/wav2vec2-xls-r-300m-uk-with-small-lm
|
Yehor
|
wav2vec2
| 13 | 208 |
transformers
| 4 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['uk']
|
['mozilla-foundation/common_voice_10_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 708 | false |
🇺🇦 Join Ukrainian Speech Recognition Community - https://t.me/speech_recognition_uk
⭐ See other Ukrainian models - https://github.com/egorsmkv/speech-recognition-uk
This model's transcriptions include apostrophes and hyphens.
The language model is trained on the texts of the Common Voice dataset that are also used during training.
Metrics:
| Dataset | CER | WER |
|-|-|-|
| CV7 (no LM) | 0.0432 | 0.2288 |
| CV7 (with LM) | 0.0169 | 0.0706 |
| CV10 (no LM) | 0.0412 | 0.2206 |
| CV10 (with LM) | 0.0118 | 0.0463 |
More:
- The same model, but trained on noisy data: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-with-small-lm-noisy
- Traced JIT version: https://huggingface.co/Yehor/wav2vec2-xls-r-300m-uk-traced-jit
|
9f48abc79a08aaecd66f1d074e91d223
|
l3cube-pune/marathi-tweets-bert
|
l3cube-pune
|
bert
| 8 | 38 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-4.0
|
['mr']
|
['L3Cube-MahaCorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 615 | false |
## MahaTweetBERT
A MahaBERT (l3cube-pune/marathi-bert-v2) model finetuned on Marathi Tweets.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267).
Released under project: https://github.com/l3cube-pune/MarathiNLP
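A minimal, untested fill-mask sketch (the Marathi sentence is a made-up example):
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="l3cube-pune/marathi-tweets-bert")
# Use the tokenizer's own mask token to stay robust to the vocabulary config.
sentence = f"मी आज {fill.tokenizer.mask_token} जाणार आहे."
print(fill(sentence))
```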
```
@article{gokhale2022spread,
title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection},
author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj},
journal={arXiv preprint arXiv:2210.04267},
year={2022}
}
```
|
23309adf73dd766b3cad23f80d1f9cf5
|
keras-io/char-lstm-seq2seq
|
keras-io
| null | 9 | 4 |
keras
| 0 |
translation
| false | false | false |
['cc0-1.0']
|
['en', 'fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['seq2seq', 'translation']
| false | true | true | 913 | false |
## Keras Implementation of Character-level recurrent sequence-to-sequence model
This repo contains the model and the notebook [to this Keras example on Character-level recurrent sequence-to-sequence model](https://keras.io/examples/nlp/lstm_seq2seq/).
Full credits to: [fchollet](https://twitter.com/fchollet)
## Background Information
This example demonstrates how to implement a basic character-level recurrent sequence-to-sequence model. We apply it to translating short English sentences into short French sentences, character-by-character. Note that it is fairly unusual to do character-level machine translation, as word-level models are more common in this domain.
## Limitations
It works on text of length <= 15 characters
## Parameters needed for using the model
```python
latent_dim = 256
num_encoder_tokens = 71
max_encoder_seq_length = 15
num_decoder_tokens = 92
max_decoder_seq_length = 59
```
|
2e9c019787d20aa47d69da013b3ca23c
|
nandysoham16/5-clustered_aug
|
nandysoham16
|
distilbert
| 8 | 0 |
keras
| 0 | null | false | true | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,773 | false |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
['Daylight_saving_time', 'Chihuahua_(state)', 'United_States_dollar', 'Gregorian_calendar', 'Circadian_rhythm', 'Department_store', 'Planck_constant']
- **Developed by:** nandysoham
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
|
c91dd5702623a29aa93e98c39e377428
|
rwang5688/distilbert-base-uncased-finetuned-cola
|
rwang5688
|
distilbert
| 13 | 2 |
transformers
| 1 |
text-classification
| true | true | false |
apache-2.0
| null |
['glue']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,568 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7166
- Matthews Correlation: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5239 | 1.0 | 535 | 0.5124 | 0.4240 |
| 0.3472 | 2.0 | 1070 | 0.4966 | 0.5180 |
| 0.2359 | 3.0 | 1605 | 0.6474 | 0.5174 |
| 0.1723 | 4.0 | 2140 | 0.7166 | 0.5422 |
| 0.1285 | 5.0 | 2675 | 0.8366 | 0.5367 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.8.1+cpu
- Datasets 2.4.0
- Tokenizers 0.10.3
|
4cb7c1458cb0c804c5507d16e3c0936b
|
Yanael/bert-finetuned-mrpc
|
Yanael
|
bert
| 18 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 918 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.8.1+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
1504ccd05c6dbfca545248aeddff59d9
|
sd-dreambooth-library/nikeardilla
|
sd-dreambooth-library
| null | 58 | 145 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 6,629 | false |
### nikeardilla Dreambooth model trained by kukuhtw with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
https://linktr.ee/kukuhtw
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Overview : Who is Nike Ardilla ?
Raden Rara Nike Ratnadilla (27 December 1975 – 19 March 1995),
better known as Nike Ardilla (Indonesian pronunciation: [nikə ardila]),
was an Indonesian singer, actress, model, and philanthropist of Sundanese descent.
Usually referred to as the Lady Rocker and the Queen of Rock by the Indonesian media,
Ardilla was instrumental in the return of teen pop rock to the country's music scene and
had a dominant presence during the first half of the 1990s. At the height of her career and fame in 1995,
she was involved in a traffic incident that took her life at the age of 19. Her death prompted an outpouring of nationwide grief.
Source : Wikipedia
use keyword : <i>Nikeardilla</i>
sample prompt :
<i>portrait of Nikeardilla style studio ghibli</i>
<i>Nikeardilla . 3d model, unreal engine realistic render, 8 k, micro detail, intricate, elegant, highly detailed, centered, digital painting, artstation, smooth, sharp focus, illustration, artgerm, tomasz alen kopera, wlop</i>
<i>portrait of smiling Nikeardilla, digital painting, highly detailed, intricate, 3d model, unreal engine realistic render, 8 k, micro detail, intricate, elegant, highly detailed, centered, digital painting, artstation, smooth, sharp focus, illustration, artgerm, tomasz alen kopera, wlop</i>
<i>portrait of Nikeardilla in style pixar disney</i>
<i>portrait of Nikeardilla by greg rutkowski, trending artstation</i>
<i>portrait of Nikeardilla in style comic dc</i>
<i>portrait of Nikeardilla in marvel universe</i>
<i>portrait of Nikeardilla , low poly, colorfull</i>
<i>portrait of Nikeardilla in water oil made by davinci</i>
<i>portrait of Nikeardilla in water oil made by picasso</i>
<i>A detailed portrait of Nikeardilla illustrator, by justin gerard and greg rutkowski, digital art, realistic painting, dnd, character design, trending on artstation</i>
<i>young Nikeardilla person style yoji-shinkawa</i>
<i>Nikeardilla, portrait painting by richard schmid, edgar maxence, kehinde wiley, thomas moran, maxfield parrish, studio ghibli, loish, alphonse mucha, fashion photography </i>
<i>portrait Nikeardilla, photo realistic, highly detailed, perfect face, art by artgerm </i>
<i>Nikeardilla as a character from pixar, au naturel, PS2, PS1, hyper detailed, digital art, trending in artstation, cinematic lighting, studio quality, smooth render, unreal engine 5 rendered, octane rendered, art style by klimt and nixeu and ian sprigger and wlop and krenz cushart.</i>
Sample Results of this concept:















|
9df8a8c8aed929d3ce1174ae6d71ad0e
|
krinal214/xlm-3lang
|
krinal214
|
xlm-roberta
| 12 | 8 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['tydiqa']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,134 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-eng-beng-tel
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7303
## Model description
More information needed
## Intended uses & limitations
More information needed
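As a minimal usage sketch, the checkpoint can be loaded with the `transformers` question-answering pipeline; the question and context below are placeholders:
```python
from transformers import pipeline

# Extractive QA with the fine-tuned checkpoint.
qa = pipeline("question-answering", model="krinal214/xlm-3lang")

result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result["answer"], result["score"])
```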
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2927 | 1.0 | 810 | 0.7303 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
c2c72dcabdaa4cd73b7602ab24de4d1a
|
minhhoque/vit-base-patch16-224-in21k_ft-cifar10test
|
minhhoque
|
vit
| 7 | 3 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,062 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k_ft-cifar10test
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
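A minimal inference sketch with the `transformers` image-classification pipeline; the image path is a placeholder and the predicted labels come from the repo's config:
```python
from transformers import pipeline
from PIL import Image

classifier = pipeline(
    "image-classification",
    model="minhhoque/vit-base-patch16-224-in21k_ft-cifar10test",
)

# Placeholder image path; any RGB image works for a smoke test.
image = Image.open("example.jpg")
print(classifier(image, top_k=3))
```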
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
|
d818d0b8b6d47e1ded598748f977d60e
|
clementchadebec/reproduced_iwae
|
clementchadebec
| null | 7 | 0 |
pythae
| 0 | null | false | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pythae', 'reproducibility']
| false | true | true | 655 | false |
This model was trained with pythae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from pythae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="clementchadebec/reproduced_iwae")
```
## Reproducibility
This trained model reproduces the results of Table 1 in [1].
| Model | Dataset | Metric | Obtained value | Reference value |
|:---:|:---:|:---:|:---:|:---:|
| IWAE (n_samples=5) | Binary MNIST | NLL (5000 IS) | 87.85 (0.01) | 87.6 |
| **IWAE (n_samples=50)** | Binary MNIST | NLL (5000 IS) | 86.82 (0.01) | 87.1 |
[1] Burda, Y. et al, *Importance Weighted Autoencoders*, ArXiv:1509.00519
|
a1698fd33f62feedb6ae6d9969126fdb
|
raedinkhaled/deit-base-mri
|
raedinkhaled
|
deit
| 16 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'generated_from_trainer']
| true | true | true | 1,342 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-mri
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the mriDataSet dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- Accuracy: 0.9901
## Model description
More information needed
## Intended uses & limitations
More information needed
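A minimal inference sketch via the `transformers` image-classification pipeline; the MRI image path is a placeholder:
```python
from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="raedinkhaled/deit-base-mri")

# Placeholder path to an MRI scan image.
image = Image.open("mri_scan.png")
print(classifier(image))
```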
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0107 | 0.8 | 500 | 0.0782 | 0.9887 |
| 0.0065 | 1.6 | 1000 | 0.0657 | 0.9901 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
547a09de14b8ae58d5018ee33471cd07
|
ghatgetanuj/bert-large-uncased_cls_CR
|
ghatgetanuj
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,520 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased_cls_CR
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3385
- Accuracy: 0.9415
## Model description
More information needed
## Intended uses & limitations
More information needed
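A minimal inference sketch with the `transformers` text-classification pipeline; the example sentence is a placeholder and the returned label names come from the repo config (they may be generic `LABEL_0`/`LABEL_1`):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ghatgetanuj/bert-large-uncased_cls_CR")

# Placeholder customer-review sentence.
print(classifier("The battery life of this camera is excellent."))
```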
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 213 | 0.3553 | 0.8936 |
| No log | 2.0 | 426 | 0.3185 | 0.9069 |
| 0.2806 | 3.0 | 639 | 0.2679 | 0.9255 |
| 0.2806 | 4.0 | 852 | 0.2993 | 0.9441 |
| 0.0578 | 5.0 | 1065 | 0.3385 | 0.9415 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
46f7c8dbb6344097046231b87d4d68cc
|
clboetticher/mt5-small-finetuned-amazon-en-es
|
clboetticher
|
mt5
| 11 | 6 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,995 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0229
- Rouge1: 17.552
- Rouge2: 8.6159
- Rougel: 17.3207
- Rougelsum: 17.1968
## Model description
More information needed
## Intended uses & limitations
More information needed
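A minimal usage sketch with the `transformers` summarization pipeline; the review text is a placeholder:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="clboetticher/mt5-small-finetuned-amazon-en-es")

# Placeholder product review to summarize.
review = (
    "I bought this for my daughter and she loves it. The build quality is great, "
    "shipping was fast, and the price was very reasonable."
)
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```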
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 3.6836 | 1.0 | 1209 | 3.2362 | 17.2827 | 8.6322 | 16.7811 | 16.7223 |
| 3.6489 | 2.0 | 2418 | 3.0808 | 17.7206 | 8.7236 | 17.0749 | 16.9989 |
| 3.4263 | 3.0 | 3627 | 3.0574 | 17.9532 | 9.55 | 17.604 | 17.4782 |
| 3.3129 | 4.0 | 4836 | 3.0444 | 16.8908 | 8.1947 | 16.3227 | 16.2468 |
| 3.2353 | 5.0 | 6045 | 3.0449 | 17.0334 | 8.1498 | 16.8367 | 16.6738 |
| 3.1678 | 6.0 | 7254 | 3.0326 | 18.197 | 9.3959 | 18.0328 | 17.86 |
| 3.1365 | 7.0 | 8463 | 3.0276 | 17.8769 | 9.1995 | 17.5326 | 17.4261 |
| 3.1118 | 8.0 | 9672 | 3.0229 | 17.552 | 8.6159 | 17.3207 | 17.1968 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
540e2157e67fb14d9d48ee12d8133d89
|
grantsl/distilbert-base-uncased-finetuned-emotion-2
|
grantsl
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,345 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3608
- Accuracy: 0.8433
- F1: 0.8433
## Model description
More information needed
## Intended uses & limitations
More information needed
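A minimal inference sketch with the `transformers` text-classification pipeline; the input sentence is a placeholder and label names are taken from the repo config:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="grantsl/distilbert-base-uncased-finetuned-emotion-2",
)

print(classifier("I can't believe how well this turned out, I'm thrilled!"))
```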
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4095 | 1.0 | 875 | 0.3667 | 0.8353 | 0.8351 |
| 0.3348 | 2.0 | 1750 | 0.3608 | 0.8433 | 0.8433 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
8771db0cf98c6044f2e657d01b822959
|
iamcharanhu/t5-small-finetuned-wikisql
|
iamcharanhu
|
t5
| 11 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['wikisql']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,795 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-wikisql
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1245
- Rouge2 Precision: 0.8183
- Rouge2 Recall: 0.7262
- Rouge2 Fmeasure: 0.7625
## Model description
More information needed
## Intended uses & limitations
More information needed
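A minimal usage sketch with the `transformers` text2text-generation pipeline; the `translate English to SQL:` prefix is an assumption, since the exact prompt format used during fine-tuning is not documented here:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="iamcharanhu/t5-small-finetuned-wikisql")

# Assumed prefix; adjust if the model was trained with a different prompt format.
question = "translate English to SQL: How many heads of the departments are older than 56?"
print(generator(question, max_length=64)[0]["generated_text"])
```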
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1954 | 1.0 | 4049 | 0.1575 | 0.7935 | 0.7032 | 0.7386 |
| 0.1643 | 2.0 | 8098 | 0.1374 | 0.8084 | 0.7168 | 0.7528 |
| 0.1517 | 3.0 | 12147 | 0.1296 | 0.8136 | 0.7221 | 0.7581 |
| 0.1459 | 4.0 | 16196 | 0.1256 | 0.817 | 0.7254 | 0.7614 |
| 0.1414 | 5.0 | 20245 | 0.1245 | 0.8183 | 0.7262 | 0.7625 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
a6ba3a15d7ed5378caac6b55ff34fdcd
|
LeBenchmark/wav2vec-FR-1K-Female-base
|
LeBenchmark
|
wav2vec2
| 6 | 0 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['fr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['wav2vec2']
| false | true | true | 2,098 | false |
# LeBenchmark: wav2vec2 base model trained on 1K hours of French *female-only* speech
LeBenchmark provides an ensemble of wav2vec2 models pretrained on different French datasets containing spontaneous, read, and broadcast speech.
For more information about our gender study of SSL models, please refer to our paper: [A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems](https://arxiv.org/abs/2204.01397)
## Model and data descriptions
We release four gender-specific models trained on 1K hours of speech.
- [wav2vec2-FR-1K-Male-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-large/)
- [wav2vec2-FR-1k-Male-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Male-base/)
- [wav2vec2-FR-1K-Female-large](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/)
- [wav2vec2-FR-1K-Female-base](https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/)
## Intended uses & limitations
Pretrained wav2vec2 models are distributed under the Apache-2.0 license. Hence, they can be reused extensively without strict limitations. However, benchmarks and data may be linked to corpora that are not completely open-sourced.
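Since this checkpoint is a self-supervised encoder without a CTC head, it is typically used as a feature extractor for downstream French speech tasks. A minimal sketch, assuming the checkpoint loads with the standard `Wav2Vec2Model` class and that input audio is sampled at 16 kHz:
```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("LeBenchmark/wav2vec-FR-1K-Female-base")
model.eval()

# One second of dummy 16 kHz audio; replace with real (normalized) French speech.
input_values = torch.zeros(1, 16000)
with torch.no_grad():
    hidden_states = model(input_values).last_hidden_state
print(hidden_states.shape)  # (batch, frames, hidden_size)
```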
## Referencing our gender-specific models
```
@inproceedings{boito22_interspeech,
author={Marcely Zanon Boito and Laurent Besacier and Natalia Tomashenko and Yannick Estève},
title={{A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems}},
year=2022,
booktitle={Proc. Interspeech 2022},
pages={1278--1282},
doi={10.21437/Interspeech.2022-353}
}
```
## Referencing LeBenchmark
```
@inproceedings{evain2021task,
title={Task agnostic and task specific self-supervised learning from speech with \textit{LeBenchmark}},
author={Evain, Sol{\`e}ne and Nguyen, Ha and Le, Hang and Boito, Marcely Zanon and Mdhaffar, Salima and Alisamir, Sina and Tong, Ziyi and Tomashenko, Natalia and Dinarelli, Marco and Parcollet, Titouan and others},
booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
year={2021}
}
```
|
d41586a5b7be5b5cf850fc10d0aa526e
|
parambharat/whisper-base-kn
|
parambharat
|
whisper
| 13 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['kn']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,849 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Kn - Bharat Ramanathan
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1974
- Wer: 30.8790
## Model description
More information needed
## Intended uses & limitations
More information needed
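A minimal transcription sketch with the `transformers` automatic-speech-recognition pipeline; the audio file path is a placeholder (decoding a local file requires `ffmpeg`):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="parambharat/whisper-base-kn")

# Placeholder path to a 16 kHz Kannada speech recording.
print(asr("kannada_sample.wav")["text"])
```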
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 96
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.572 | 0.1 | 500 | 0.3198 | 50.3005 |
| 0.3153 | 0.2 | 1000 | 0.2464 | 37.2652 |
| 0.2533 | 0.3 | 1500 | 0.2298 | 36.5515 |
| 0.2212 | 1.04 | 2000 | 0.2157 | 34.5229 |
| 0.2013 | 1.14 | 2500 | 0.2090 | 32.6071 |
| 0.1881 | 1.24 | 3000 | 0.2043 | 32.7198 |
| 0.1784 | 1.34 | 3500 | 0.2014 | 30.8039 |
| 0.1715 | 2.08 | 4000 | 0.2014 | 31.5928 |
| 0.166 | 2.18 | 4500 | 0.1991 | 31.2547 |
| 0.1616 | 2.28 | 5000 | 0.1974 | 30.8790 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
d2bd0ead3fc13bd561f9cd217ef54a6d
|