repo_id (stringlengths 4–110) | author (stringlengths 2–27, ⌀) | model_type (stringlengths 2–29, ⌀) | files_per_repo (int64 2–15.4k) | downloads_30d (int64 0–19.9M) | library (stringlengths 2–37, ⌀) | likes (int64 0–4.34k) | pipeline (stringlengths 5–30, ⌀) | pytorch (bool, 2 classes) | tensorflow (bool, 2 classes) | jax (bool, 2 classes) | license (stringlengths 2–30) | languages (stringlengths 4–1.63k, ⌀) | datasets (stringlengths 2–2.58k, ⌀) | co2 (stringclasses, 29 values) | prs_count (int64 0–125) | prs_open (int64 0–120) | prs_merged (int64 0–15) | prs_closed (int64 0–28) | discussions_count (int64 0–218) | discussions_open (int64 0–148) | discussions_closed (int64 0–70) | tags (stringlengths 2–513) | has_model_index (bool, 2 classes) | has_metadata (bool, 1 class) | has_text (bool, 1 class) | text_length (int64 401–598k) | is_nc (bool, 1 class) | readme (stringlengths 0–598k) | hash (stringlengths 32–32) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Hate-speech-CNERG/marathi-codemixed-abusive-MuRIL
|
Hate-speech-CNERG
|
bert
| 7 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
afl-3.0
|
['mr']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 923 | false |
This model is used to detect **abusive speech** in **Marathi**. It is fine-tuned from the MuRIL model on a Marathi abusive speech dataset.
The model is trained with a learning rate of 2e-5. The training code can be found at this [url](https://github.com/hate-alert/IndicAbusive).
- LABEL_0: Normal
- LABEL_1: Abusive
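For reference (not part of the original card), the label mapping above can be used directly with the `transformers` text-classification pipeline; the snippet below is a minimal inference sketch.
```python
# Minimal inference sketch; LABEL_0 = Normal, LABEL_1 = Abusive (see mapping above).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/marathi-codemixed-abusive-MuRIL",
)

# Example Marathi input ("your text here"); returns e.g. [{'label': 'LABEL_1', 'score': ...}]
print(classifier("तुमचा मजकूर इथे"))
```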
### For more details about our paper
Mithun Das, Somnath Banerjee and Animesh Mukherjee. "[Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages](https://arxiv.org/abs/2204.12543)". Accepted at ACM HT 2022.
***Please cite our paper in any published work that uses any of these resources.***
~~~
@article{das2022data,
title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
journal={arXiv preprint arXiv:2204.12543},
year={2022}
}
~~~
|
7f492c2d7971fa9bb9d41cfb66fdc471
|
leixu/xlm-roberta-base-finetuned-panx-de
|
leixu
|
xlm-roberta
| 12 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,313 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1377
- F1: 0.8605
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
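As a rough illustration only (the exact training script is not included in this card), the settings above correspond approximately to a `transformers` `TrainingArguments` configuration like this:
```python
# Illustrative sketch: approximate TrainingArguments matching the values listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas/epsilon listed above are the library defaults
    num_train_epochs=3,
)
```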
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2573 | 1.0 | 525 | 0.1651 | 0.8199 |
| 0.1296 | 2.0 | 1050 | 0.1482 | 0.8413 |
| 0.081 | 3.0 | 1575 | 0.1377 | 0.8605 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.7.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
71e2437536615993eee832f62e3ca589
|
lchaloupsky/czech-gpt2-oscar
|
lchaloupsky
|
gpt2
| 9 | 16 |
transformers
| 1 |
text-generation
| true | true | false |
mit
|
['cs']
|
['oscar']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 8,018 | false |
# Czech GPT-2 small model trained on the OSCAR dataset
This model was trained as a part of the [master thesis](https://dspace.cuni.cz/handle/20.500.11956/176356?locale-attribute=en) on the Czech part of the [OSCAR](https://huggingface.co/datasets/oscar) dataset.
## Introduction
Czech-GPT2-OSCAR (Czech GPT-2 small) is a state-of-the-art language model for Czech based on the GPT-2 small model. Unlike the original GPT-2 small model, this model is trained to predict only 512 tokens instead of 1024, as it serves as a basis for the [Czech-GPT2-Medical](https://huggingface.co/lchaloupsky/czech-gpt2-medical) model.
The model was trained on the Czech part of the [OSCAR](https://huggingface.co/datasets/oscar) dataset using transfer-learning and fine-tuning techniques, in about a week on a single NVIDIA A100 SXM4 40GB GPU, with a total of 21 GB of training data.
This model was trained as part of the master thesis as a proof of concept that it is possible to obtain a state-of-the-art Czech language model with fewer resources than the original and in a significantly shorter time, and mainly as a basis for the [Czech-GPT2-Medical](https://huggingface.co/lchaloupsky/czech-gpt2-medical) model. There was no Czech GPT-2 model available at the time the master thesis began.
It was fine-tuned from the English pre-trained GPT-2 small using the Hugging Face libraries (Transformers and Tokenizers) wrapped into the fastai v2 Deep Learning framework. All the fine-tuning fastai v2 techniques were used.
The solution is based on the [Faster than training from scratch — Fine-tuning the English GPT-2 in any language with Hugging Face and fastai v2 (practical case with Portuguese)](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) article.
The trained model is now available on Hugging Face under [czech-gpt2-oscar](https://huggingface.co/lchaloupsky/czech-gpt2-oscar/). For more information, please reach out in the discussion section.
## Training/Evaluation
For more information on training the model or its evaluation, please have a look at the [thesis](https://dspace.cuni.cz/handle/20.500.11956/176356?locale-attribute=en) itself.
## GPT-2 Model description
*Note: information copied/pasted from [Model: gpt2 >> Model description](https://huggingface.co/gpt2#model-description)*
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## How to use Czech-GPT2-OSCAR with HuggingFace (PyTorch)
*The following code uses PyTorch. To use TensorFlow, see the corresponding section below.*
### Load Czech-GPT2-OSCAR and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
import torch
tokenizer = GPT2Tokenizer.from_pretrained("lchaloupsky/czech-gpt2-oscar")
model = GPT2LMHeadModel.from_pretrained("lchaloupsky/czech-gpt2-oscar")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
# For older versions of the 'transformers' library use this
# tokenizer.max_len=1024
model.eval() # disable dropout (or leave in train mode to finetune)
```
### Generate one word
```python
# input sequence
text = "Univerzita je základem"
inputs = tokenizer(text, return_tensors="pt")
# model output
outputs = model(**inputs, labels=inputs["input_ids"])
loss, logits = outputs[:2]
predicted_index = torch.argmax(logits[0, -1, :]).item()
predicted_text = tokenizer.decode([predicted_index])
# results
print('input text:', text)
print('predicted text:', predicted_text)
```
### Generate one full sequence
```python
# input sequence
text = "Univerzita je základem"
inputs = tokenizer(text, return_tensors="pt") # tokenizer.encode(text, return_tensors="pt") directly for input_ids
# model output using Top-k sampling text generation method
sample_outputs = model.generate(inputs.input_ids,
pad_token_id=50256,
do_sample=True,
max_length=50, # put the token number you want
top_k=40,
num_return_sequences=1)
# generated sequence
for i, sample_output in enumerate(sample_outputs):
print("{}\n\n{}".format(i+1, tokenizer.decode(sample_output.tolist()))) # tokenizer.decode(sample_output, skip_special_tokens=True)
```
## How to use Czech-GPT2-OSCAR with HuggingFace (TensorFlow)
*The following code uses TensorFlow. To use PyTorch, see the corresponding section above.*
### Load Czech-GPT2-OSCAR and its sub-word tokenizer (Byte-level BPE)
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
import tensorflow as tf
tokenizer = GPT2Tokenizer.from_pretrained("lchaloupsky/czech-gpt2-oscar")
model = TFGPT2LMHeadModel.from_pretrained("lchaloupsky/czech-gpt2-oscar")
# Get sequence length max of 1024
tokenizer.model_max_length=1024
# For older versions of the 'transformers' library use this
# tokenizer.max_len=1024
# Note: unlike the PyTorch model above, the TF/Keras model has no eval() method; dropout is disabled at inference time unless training=True is passed
```
### Generate one full sequence
```python
# input sequence
text = "Univerzita je základem"
input_ids = tokenizer.encode(text, return_tensors="tf")
# model output using Top-k sampling text generation method
outputs = model.generate(input_ids, eos_token_id=50256, pad_token_id=50256,
do_sample=True,
max_length=40,
top_k=40)
print(tokenizer.decode(outputs[0])) # tokenizer.decode(outputs[0], skip_special_tokens=True)
```
## Limitations and bias
The training data used for this model comes from the Czech part of the OSCAR dataset. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their model card:
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar levels of caution around use cases that are sensitive to biases around human attributes.
## Author
Czech-GPT2-OSCAR was trained and evaluated by [Lukáš Chaloupský](https://cz.linkedin.com/in/luk%C3%A1%C5%A1-chaloupsk%C3%BD-0016b8226?original_referer=https%3A%2F%2Fwww.google.com%2F) thanks to the computing power of the GPU (NVIDIA A100 SXM4 40GB) cluster of [IT4I](https://www.it4i.cz/) (VSB - Technical University of Ostrava).
## Citation
```
@article{chaloupsky2022automatic,
title={Automatic generation of medical reports from chest X-rays in Czech},
author={Chaloupsk{\`y}, Luk{\'a}{\v{s}}},
year={2022},
publisher={Charles University, Faculty of Mathematics and Physics}
}
```
|
9f312f23028a98203ef467aceb6aabff
|
kalpeshk2011/rankgen-t5-large-all
|
kalpeshk2011
|
t5
| 5 | 2 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['Wikipedia', 'PG19', 'C4', 'relic', 'ChapterBreak', 'HellaSwag', 'ROCStories']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['t5', 'contrastive learning', 'ranking', 'decoding', 'metric learning', 'pytorch', 'text generation', 'retrieval']
| false | true | true | 6,148 | false |
## Main repository
https://github.com/martiansideofthemoon/rankgen
## What is RankGen?
RankGen is a suite of encoder models (100M-1.2B parameters) which map prefixes and generations from any pretrained English language model to a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to human annotators who are English writers). RankGen can also be used like a dense retriever, and achieves state-of-the-art performance on [literary retrieval](https://relic.cs.umass.edu/leaderboard.html).
## Setup
**Requirements** (`pip` will install these dependencies for you)
Python 3.7+, `torch` (CUDA recommended), `transformers`
**Installation**
```
python3.7 -m virtualenv rankgen-venv
source rankgen-venv/bin/activate
pip install rankgen
```
Get the data [here](https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4?usp=sharing) and place the folder in the root directory. Alternatively, use `gdown` as shown below,
```
gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4
```
Run the test script to make sure the RankGen checkpoint has loaded correctly,
```
python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all
### Expected output
0.0009239262409127233
0.0011521980725477804
```
## Using RankGen
Loading RankGen is simple using the HuggingFace APIs (see Method-2 below), but we suggest using [`RankGenEncoder`](https://github.com/martiansideofthemoon/rankgen/blob/master/rankgen/rankgen_encoder.py), which is a small wrapper around the HuggingFace APIs for correctly preprocessing data and doing tokenization automatically. You can either download [our repository](https://github.com/martiansideofthemoon/rankgen) and install the API, or copy the implementation from [below](#rankgenencoder-implementation).
#### [SUGGESTED] Method-1: Loading the model with RankGenEncoder
```
from rankgen import RankGenEncoder, RankGenGenerator
rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-large-all")
# Encoding vectors
prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix")
suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix")
# Generating text
# use a HuggingFace compatible language model
generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium")
inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."]
# Baseline nucleus sampling
print(generator.generate_single(inputs, top_p=0.9)[0][0])
# Over-generate and re-rank
print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0])
# Beam search
print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0])
```
#### Method-2: Loading the model with HuggingFace APIs
```
from transformers import T5Tokenizer, AutoModel
tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-large")
model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-large-all", trust_remote_code=True)
```
### RankGenEncoder Implementation
```
import tqdm
import torch
from transformers import T5Tokenizer, T5EncoderModel, AutoModel
class RankGenEncoder():
def __init__(self, model_path, max_batch_size=32, model_size=None, cache_dir=None):
assert model_path in ["kalpeshk2011/rankgen-t5-xl-all", "kalpeshk2011/rankgen-t5-xl-pg19", "kalpeshk2011/rankgen-t5-base-all", "kalpeshk2011/rankgen-t5-large-all"]
self.max_batch_size = max_batch_size
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
if model_size is None:
if "t5-large" in model_path or "t5_large" in model_path:
self.model_size = "large"
elif "t5-xl" in model_path or "t5_xl" in model_path:
self.model_size = "xl"
else:
self.model_size = "base"
else:
self.model_size = model_size
self.tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-{self.model_size}", cache_dir=cache_dir)
self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
self.model.to(self.device)
self.model.eval()
def encode(self, inputs, vectors_type="prefix", verbose=False, return_input_ids=False):
tokenizer = self.tokenizer
max_batch_size = self.max_batch_size
if isinstance(inputs, str):
inputs = [inputs]
if vectors_type == 'prefix':
inputs = ['pre ' + input for input in inputs]
max_length = 512
else:
inputs = ['suffi ' + input for input in inputs]
max_length = 128
all_embeddings = []
all_input_ids = []
for i in tqdm.tqdm(range(0, len(inputs), max_batch_size), total=(len(inputs) // max_batch_size) + 1, disable=not verbose, desc=f"Encoding {vectors_type} inputs:"):
tokenized_inputs = tokenizer(inputs[i:i + max_batch_size], return_tensors="pt", padding=True)
for k, v in tokenized_inputs.items():
tokenized_inputs[k] = v[:, :max_length]
tokenized_inputs = tokenized_inputs.to(self.device)
with torch.inference_mode():
batch_embeddings = self.model(**tokenized_inputs)
all_embeddings.append(batch_embeddings)
if return_input_ids:
all_input_ids.extend(tokenized_inputs.input_ids.cpu().tolist())
return {
"embeddings": torch.cat(all_embeddings, dim=0),
"input_ids": all_input_ids
}
```
|
e8be346a54337c905d639d771c49ad01
|
Annabel/my-awesome-model
|
Annabel
| null | 5 | 0 |
sklearn
| 0 |
tabular-classification
| false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sklearn', 'skops', 'tabular-classification']
| false | true | true | 6,653 | false |
# Model description
This is a DecisionTreeClassifier model trained on the breast cancer dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------|
| ccp_alpha | 0.0 |
| class_weight | |
| criterion | gini |
| max_depth | |
| max_features | |
| max_leaf_nodes | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| random_state | |
| splitter | best |
</details>
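For context, here is a minimal sketch (assuming scikit-learn's built-in breast cancer dataset and a default train/test split, not necessarily the exact training setup) of how such a model can be trained and scored:
```python
# Illustrative sketch: train a default DecisionTreeClassifier on the breast cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("f1 score:", f1_score(y_test, pred))
```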
### Model Plot
The model plot is below.
`DecisionTreeClassifier()` *(the interactive scikit-learn HTML diagram from the original card is not reproduced here)*
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|----------|
| accuracy | 0.929825 |
| f1 score | 0.929825 |
# How to Get Started with the Model
Use the code below to get started with the model.
```python
import joblib
import json
import pandas as pd
clf = joblib.load("example.pkl")
with open("config.json") as f:
config = json.load(f)
clf.predict(pd.DataFrame.from_dict(config["sklearn"]["example_input"]))
```
# Model Card Authors
This model card is written by the following authors:
skops_user
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
# Additional Content
## confusion_matrix

|
ac40c9d825bfc529970aab89821e705d
|
TheLastBen/froggy-style
|
TheLastBen
| null | 36 | 26 |
diffusers
| 6 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 0 | 2 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 2,807 | false |
### Froggy Style V1.5
#### V1.5 Model by TheLastBen
This model was trained on 11 Midjourney images (512x512) for 1,300 steps, plus 300 text_encoder steps (30% of the total steps because the total step count is low; normally 15%).
#### Prompts to start with:
ttdddd , __________, movie, ultra high quality render, high quality graphical details, 8k, volumetric lighting, micro details, (cinematic)
Negative prompt: bad, low-quality, 3d, game
The prompt can also be as simple as the instance name, ttdddd, and you will still get great results.
You can also train your own concepts and upload them to the library by using [fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
Test the concept via the A1111 Colab: [fast-stable-diffusion-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
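As an illustration (not from the original card, and assuming the repository is in the standard `diffusers` format), the concept can also be loaded with the text-to-image pipeline:
```python
# Minimal sketch using the diffusers StableDiffusionPipeline API.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "TheLastBen/froggy-style", torch_dtype=torch.float16
).to("cuda")

prompt = ("ttdddd , a castle on a hill, movie, ultra high quality render, "
          "high quality graphical details, 8k, volumetric lighting, micro details, (cinematic)")
image = pipe(prompt, negative_prompt="bad, low-quality, 3d, game").images[0]
image.save("froggy-style.png")
```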
#### Sample pictures of this concept:
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
.png)
|
d648617d96eb3a32668b655f5ac98c30
|
geckos/deberta-base-fine-tuned-ner
|
geckos
|
deberta
| 14 | 169 |
transformers
| 1 |
token-classification
| true | false | false |
mit
| null |
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,726 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-ner
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0501
- Precision: 0.9563
- Recall: 0.9652
- F1: 0.9608
- Accuracy: 0.9899
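For reference (not part of the auto-generated card), inference can be run with the token-classification pipeline:
```python
# Minimal inference sketch with the transformers NER pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="geckos/deberta-base-fine-tuned-ner",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

print(ner("Hugging Face is based in New York City."))
```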
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1419 | 1.0 | 878 | 0.0628 | 0.9290 | 0.9288 | 0.9289 | 0.9835 |
| 0.0379 | 2.0 | 1756 | 0.0466 | 0.9456 | 0.9567 | 0.9511 | 0.9878 |
| 0.0176 | 3.0 | 2634 | 0.0473 | 0.9539 | 0.9575 | 0.9557 | 0.9890 |
| 0.0098 | 4.0 | 3512 | 0.0468 | 0.9570 | 0.9635 | 0.9603 | 0.9896 |
| 0.0043 | 5.0 | 4390 | 0.0501 | 0.9563 | 0.9652 | 0.9608 | 0.9899 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
|
c9f36b01dc6cf36cc858154980b549b9
|
Abderrahim2/bert-finetuned-gender_classification
|
Abderrahim2
|
bert
| 10 | 3 |
transformers
| 1 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,042 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-gender_classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1484
- F1: 0.9645
- Roc Auc: 0.9732
- Accuracy: 0.964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.1679 | 1.0 | 1125 | 0.1781 | 0.928 | 0.946 | 0.927 |
| 0.1238 | 2.0 | 2250 | 0.1252 | 0.9516 | 0.9640 | 0.95 |
| 0.0863 | 3.0 | 3375 | 0.1283 | 0.9515 | 0.9637 | 0.95 |
| 0.0476 | 4.0 | 4500 | 0.1419 | 0.9565 | 0.9672 | 0.956 |
| 0.0286 | 5.0 | 5625 | 0.1428 | 0.9555 | 0.9667 | 0.954 |
| 0.0091 | 6.0 | 6750 | 0.1515 | 0.9604 | 0.9700 | 0.959 |
| 0.0157 | 7.0 | 7875 | 0.1535 | 0.9580 | 0.9682 | 0.957 |
| 0.0048 | 8.0 | 9000 | 0.1484 | 0.9645 | 0.9732 | 0.964 |
| 0.0045 | 9.0 | 10125 | 0.1769 | 0.9605 | 0.9703 | 0.96 |
| 0.0037 | 10.0 | 11250 | 0.2007 | 0.9565 | 0.9672 | 0.956 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
578268fd790f752e31ea0fec4114db0b
|
troesy/distilbert-hatexplain-label-all-tokens-False
|
troesy
|
distilbert
| 12 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,282 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-hatexplain-label-all-tokens-False
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.1750 |
| No log | 2.0 | 348 | 0.1704 |
| 0.1846 | 3.0 | 522 | 0.1722 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
07c93acca9caa312519126621a265db6
|
Dimitre/ddpm-ema-flowers-64
|
Dimitre
| null | 11 | 3 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/flowers-102-categories']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,217 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-ema-flowers-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/flowers-102-categories` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
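Until the card is completed, here is a minimal usage sketch, assuming the standard `diffusers` `DDPMPipeline` API:
```python
# Minimal sketch: unconditional image generation with the DDPM pipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("Dimitre/ddpm-ema-flowers-64")
image = pipeline().images[0]  # generates one 64x64 flower image
image.save("flower.png")
```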
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/Dimitre/ddpm-ema-flowers-64/tensorboard?#scalars)
|
8e0e98c45924ca47c50d53be192bf1c5
|
teddy322/wav2vec2-large-xls-r-300m-kor-11385-2
|
teddy322
|
wav2vec2
| 13 | 2 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['zeroth_korean_asr']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,327 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-kor-11385-2
This model is a fine-tuned version of [teddy322/wav2vec2-large-xls-r-300m-kor-11385](https://huggingface.co/teddy322/wav2vec2-large-xls-r-300m-kor-11385) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2059
- eval_wer: 0.1471
- eval_runtime: 136.7247
- eval_samples_per_second: 3.342
- eval_steps_per_second: 0.424
- epoch: 6.47
- step: 4400
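For reference (not part of the auto-generated card), transcription can be run with the automatic-speech-recognition pipeline; the audio path below is a placeholder:
```python
# Minimal inference sketch with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="teddy322/wav2vec2-large-xls-r-300m-kor-11385-2",
)

# Expects 16 kHz audio; "sample.wav" is a placeholder path.
print(asr("sample.wav"))
```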
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 12
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
2d86406ed851c1bf0fbf71bc601aa23d
|
nateraw/trainer-rare-puppers
|
nateraw
|
vit
| 13 | 11 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,181 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trainer-rare-puppers
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the huggingpics dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 48 | 0.4087 | 0.8806 |
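For reference (not part of the auto-generated card), the classifier can be used through the image-classification pipeline; the image path below is a placeholder:
```python
# Minimal inference sketch with the transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline("image-classification", model="nateraw/trainer-rare-puppers")

# "puppy.jpg" is a placeholder; an image URL also works.
print(classifier("puppy.jpg"))
```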
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
87f92283ff583c4e0a5d228504bfbbeb
|
Helsinki-NLP/opus-mt-en-tw
|
Helsinki-NLP
|
marian
| 10 | 21 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-en-tw
* source languages: en
* target languages: tw
* OPUS readme: [en-tw](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-tw/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-tw/opus-2020-01-08.eval.txt)
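For reference (not part of the original OPUS-MT card), the model can be used with the transformers translation pipeline:
```python
# Minimal inference sketch: translate from English ("en") to the target language "tw".
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-tw")
print(translator("How are you today?")[0]["translation_text"])
```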
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.en.tw | 38.2 | 0.577 |
|
1d11ad021922b0c807e1765350e901a1
|
UpperLeftSide/marsattacks
|
UpperLeftSide
| null | 20 | 0 | null | 1 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,839 | false |
a painting in the style marsattacks
















|
f0efdba684ba6e648a06b58db6eb578c
|
YusufSahin99/IFIS_ZORK_AI_FANTASY
|
YusufSahin99
|
gpt2
| 12 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 908 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# IFIS_ZORK_AI_FANTASY
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.8.2
- Pytorch 1.9.0+cu102
- Tokenizers 0.10.3
|
5a3cefc339b8a3571ed0117196f4b052
|
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
|
CAMeL-Lab
|
bert
| 14 | 16 |
transformers
| 1 |
token-classification
| true | true | false |
apache-2.0
|
['ar']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,770 | false |
# CAMeLBERT-DA POS-MSA Model
## Model description
**CAMeLBERT-DA POS-MSA Model** is a Modern Standard Arabic (MSA) POS tagging model that was built by fine-tuning the [CAMeLBERT-DA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-da/) model.
For the fine-tuning, we used the [PATB](https://dl.acm.org/doi/pdf/10.5555/1621804.1621808) dataset.
Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT).
## Intended uses
You can use the CAMeLBERT-DA POS-MSA model as part of the transformers pipeline.
This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon.
#### How to use
To use the model with a transformers pipeline:
```python
>>> from transformers import pipeline
>>> pos = pipeline('token-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa')
>>> text = 'إمارة أبوظبي هي إحدى إمارات دولة الإمارات العربية المتحدة السبع'
>>> pos(text)
[{'entity': 'noun', 'score': 0.9999913, 'index': 1, 'word': 'إمارة', 'start': 0, 'end': 5}, {'entity': 'noun_prop', 'score': 0.9992475, 'index': 2, 'word': 'أبوظبي', 'start': 6, 'end': 12}, {'entity': 'pron', 'score': 0.999919, 'index': 3, 'word': 'هي', 'start': 13, 'end': 15}, {'entity': 'noun', 'score': 0.99993193, 'index': 4, 'word': 'إحدى', 'start': 16, 'end': 20}, {'entity': 'noun', 'score': 0.99999106, 'index': 5, 'word': 'إما', 'start': 21, 'end': 24}, {'entity': 'noun', 'score': 0.99998987, 'index': 6, 'word': '##رات', 'start': 24, 'end': 27}, {'entity': 'noun', 'score': 0.9999933, 'index': 7, 'word': 'دولة', 'start': 28, 'end': 32}, {'entity': 'noun', 'score': 0.9999899, 'index': 8, 'word': 'الإمارات', 'start': 33, 'end': 41}, {'entity': 'adj', 'score': 0.99990565, 'index': 9, 'word': 'العربية', 'start': 42, 'end': 49}, {'entity': 'adj', 'score': 0.99997944, 'index': 10, 'word': 'المتحدة', 'start': 50, 'end': 57}, {'entity': 'noun_num', 'score': 0.99938935, 'index': 11, 'word': 'السبع', 'start': 58, 'end': 63}]
```
*Note*: to download our models, you would need `transformers>=3.5.0`.
Otherwise, you could download the models manually.
## Citation
```bibtex
@inproceedings{inoue-etal-2021-interplay,
title = "The Interplay of Variant, Size, and Task Type in {A}rabic Pre-trained Language Models",
author = "Inoue, Go and
Alhafni, Bashar and
Baimukan, Nurpeiis and
Bouamor, Houda and
Habash, Nizar",
booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop",
month = apr,
year = "2021",
address = "Kyiv, Ukraine (Online)",
publisher = "Association for Computational Linguistics",
abstract = "In this paper, we explore the effects of language variants, data sizes, and fine-tuning task types in Arabic pre-trained language models. To do so, we build three pre-trained language models across three variants of Arabic: Modern Standard Arabic (MSA), dialectal Arabic, and classical Arabic, in addition to a fourth language model which is pre-trained on a mix of the three. We also examine the importance of pre-training data size by building additional models that are pre-trained on a scaled-down set of the MSA variant. We compare our different models to each other, as well as to eight publicly available models by fine-tuning them on five NLP tasks spanning 12 datasets. Our results suggest that the variant proximity of pre-training data to fine-tuning data is more important than the pre-training data size. We exploit this insight in defining an optimized system selection model for the studied tasks.",
}
```
|
94c5b20c369458ad6228fe0516c5f8f8
|
SWQ/gpt2-medium-combine
|
SWQ
|
gpt2
| 6 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,387 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-medium-combine
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7295
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9811 | 0.6 | 500 | 2.8135 |
| 2.8017 | 1.2 | 1000 | 2.7691 |
| 2.7255 | 1.81 | 1500 | 2.7480 |
| 2.6598 | 2.41 | 2000 | 2.7392 |
| 2.6426 | 3.01 | 2500 | 2.7306 |
| 2.6138 | 3.61 | 3000 | 2.7295 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
d35746567f31632c96dbd3bb40a3bcdf
|
sd-dreambooth-library/emily-carroll-style
|
sd-dreambooth-library
| null | 24 | 5 |
diffusers
| 4 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 1,564 | false |
### emily carroll style on Stable Diffusion via Dreambooth
#### model by hiero
This is the Stable Diffusion model fine-tuned on the emily carroll style concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a detailed digital matte illustration by sks**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:






|
654926b3e8a4ef9e347599c5512cf3f0
|
espnet/tamil_commonvoice_blstm
|
espnet
| null | 22 | 1 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['ta']
|
['commonvoice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 6,859 | false |
## ESPnet2 ASR model
### `espnet/tamil_commonvoice_blstm`
This model was trained by dzeinali using the commonvoice recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 716eb8f92e19708acfd08ba3bd39d40890d3a84b
pip install -e .
cd egs2/commonvoice/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/tamil_commonvoice_blstm
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon May 2 11:41:47 EDT 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `716eb8f92e19708acfd08ba3bd39d40890d3a84b`
- Commit date: `Thu Apr 28 19:50:59 2022 -0400`
## asr_train_asr_rnn_raw_ta_bpe150_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ta|11499|72228|66.0|30.5|3.5|3.2|37.2|79.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ta|11499|638106|93.5|3.8|2.7|1.8|8.3|79.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_rnn_asr_model_valid.acc.ave/test_ta|11499|422957|89.8|7.0|3.2|1.8|12.0|79.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_rnn.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_rnn_raw_ta_bpe150_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models:
- 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 30
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_ta_bpe150_sp/train/speech_shape
- exp/asr_stats_raw_ta_bpe150_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_ta_bpe150_sp/valid/speech_shape
- exp/asr_stats_raw_ta_bpe150_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_ta_sp/wav.scp
- speech
- sound
- - dump/raw/train_ta_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev_ta/wav.scp
- speech
- sound
- - dump/raw/dev_ta/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adadelta
optim_conf:
lr: 0.1
scheduler: null
scheduler_conf: {}
token_list:
- <blank>
- <unk>
- ி
- ு
- ா
- வ
- ை
- ர
- ன
- ▁ப
- .
- ▁க
- ்
- ▁அ
- ட
- த
- க
- ே
- ம
- ல
- ம்
- ன்
- ும்
- ய
- ▁வ
- க்க
- ▁இ
- ▁த
- த்த
- ▁
- து
- ந்த
- ப
- ▁ச
- ிய
- ▁ம
- ோ
- ெ
- ர்
- ரு
- ழ
- ப்ப
- ண
- ொ
- ▁ந
- ட்ட
- ▁எ
- ற
- ைய
- ச
- ள
- க்
- ில்
- ங்க
- ','
- ண்ட
- ▁உ
- ன்ற
- ார்
- ப்
- ூ
- ல்
- ள்
- கள
- கள்
- ாக
- ற்ற
- டு
- ீ
- ந
- '!'
- '?'
- '"'
- ஏ
- ஸ
- ஞ
- ஷ
- ஜ
- ஓ
- '-'
- ஐ
- ஹ
- A
- E
- ங
- R
- N
- ஈ
- ஃ
- O
- I
- ;
- S
- T
- L
- எ
- இ
- அ
- H
- C
- D
- M
- U
- உ
- B
- G
- P
- Y
- ''''
- ௌ
- K
- ':'
- W
- ஆ
- F
- —
- V
- ”
- J
- Z
- ’
- ‘
- X
- Q
- (
- )
- ·
- –
- ⁄
- '3'
- '4'
- ◯
- _
- '&'
- ௗ
- •
- '`'
- ஔ
- “
- ஊ
- š
- ഥ
- '1'
- '2'
- á
- ‚
- é
- ô
- ஒ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
use_preprocessor: true
token_type: bpe
bpemodel: data/ta_token_list/bpe_unigram150/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_ta_bpe150_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: vgg_rnn
encoder_conf:
rnn_type: lstm
bidirectional: true
use_projection: true
num_layers: 4
hidden_size: 1024
output_size: 1024
postencoder: null
postencoder_conf: {}
decoder: rnn
decoder_conf:
num_layers: 2
hidden_size: 1024
sampling_probability: 0
att_conf:
atype: location
adim: 1024
aconv_chans: 10
aconv_filts: 100
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
47541b6989567d51c39ad9851ba7ac76
|
muhtasham/finetuned-self_mlm_small
|
muhtasham
|
bert
| 10 | 6 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,715 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-self_mlm_small
This model is a fine-tuned version of [muhtasham/bert-small-mlm-finetuned-imdb](https://huggingface.co/muhtasham/bert-small-mlm-finetuned-imdb) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3759
- Accuracy: 0.9372
- F1: 0.9676
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2834 | 1.28 | 500 | 0.2254 | 0.9150 | 0.9556 |
| 0.1683 | 2.56 | 1000 | 0.3738 | 0.8694 | 0.9301 |
| 0.1069 | 3.84 | 1500 | 0.2102 | 0.9354 | 0.9666 |
| 0.0651 | 5.12 | 2000 | 0.2278 | 0.9446 | 0.9715 |
| 0.0412 | 6.39 | 2500 | 0.4061 | 0.9156 | 0.9559 |
| 0.0316 | 7.67 | 3000 | 0.4371 | 0.9110 | 0.9534 |
| 0.0219 | 8.95 | 3500 | 0.3759 | 0.9372 | 0.9676 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
b8b678f3831a9680281d6bb8cdf1ea6b
|
markt23917/finetuning-sentiment-model-3000-samples
|
markt23917
|
distilbert
| 13 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,056 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3351
- Accuracy: 0.8767
- F1: 0.8825
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
48b42fc6dd6e78767bf4b30049ab8bec
|
Artifact-AI/quantized_distilbert_conll2003_static
|
Artifact-AI
| null | 7 | 0 |
pytorch
| 0 |
token-classification
| true | false | false |
unlicense
|
['en']
|
['conll2003']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['onxx', 'ner', 'nlp', 'pytorch', 'token-classification']
| true | true | true | 441 | false |
| Feature | Description |
| --- | --- |
| **Name** | `quantized_distilbert_conll2003_static` |
| **Version** | `0.0.0` |
### Label Scheme
<details>
<summary>View label scheme (4 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `accuracy` | 98.39 |
| `f1` | 90.53 |
| `precision` | 89.29 |
| `recall` | 91.80 |
|
3ba9b941111c995dbdb49e7cd41fc7ec
|
monakth/distilbert-base-cased-finetuned-squadv2
|
monakth
|
distilbert
| 10 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad_v2']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-cased-finetuned-squadv2
This model is a fine-tuned version of [monakth/distilbert-base-cased-finetuned-squad](https://huggingface.co/monakth/distilbert-base-cased-finetuned-squad) on the squad_v2 dataset.
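For reference (not part of the auto-generated card), the model can be queried with the question-answering pipeline:
```python
# Minimal inference sketch with the transformers question-answering pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="monakth/distilbert-base-cased-finetuned-squadv2",
)

result = qa(
    question="Which dataset was used for fine-tuning?",
    context="The model was fine-tuned on the squad_v2 dataset.",
)
print(result["answer"], result["score"])
```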
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
12d5bf58ee7eaa86fd9956bb379f1a9e
|
talhaa/distilbert-base-uncased-finetuned-imdb
|
talhaa
|
distilbert
| 12 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,281 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2119
## Model description
More information needed
## Intended uses & limitations
More information needed
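Since this is a masked-language model, a minimal (unofficial) way to probe it is the `fill-mask` pipeline; the movie-review style prompt below is only an illustrative guess at the fine-tuning domain:
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="talhaa/distilbert-base-uncased-finetuned-imdb",
)

# DistilBERT uses the [MASK] token for masked positions.
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```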
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 3.3374 |
| No log | 2.0 | 2 | 3.8206 |
| No log | 3.0 | 3 | 2.8370 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ea2a8979057bee19692337493be49b71
|
nazirzhumakhan/finetuning-sentiment-model-3000-samples
|
nazirzhumakhan
|
distilbert
| 12 | 11 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 951 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
7c3b3a2f6b548329cf594e57a5bc985c
|
sd-concepts-library/csgo-awp-object
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,267 | false |
### csgo_awp_object on Stable Diffusion
This is the `<csgo_awp>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
6d2454be1df17c24a738e6909ca4ab98
|
MRF18/results
|
MRF18
|
roberta
| 30 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,001 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [MRF18/results](https://huggingface.co/MRF18/results) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
b9af6a82465b9e6b0a7b732661dcc438
|
espnet/kan-bayashi_vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best
|
espnet
| null | 6 | 4 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['vctk']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,857 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986237/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
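Until the official snippet is published, a minimal loading sketch along the following lines should work; it assumes the `espnet2` and `espnet_model_zoo` packages are installed and that the standard `Text2Speech` interface can resolve this model tag (GST-based models typically also expect a reference utterance at synthesis time via the `speech` argument):
```python
# Minimal sketch, not the official demo: assumes espnet2 + espnet_model_zoo are
# installed and that Text2Speech.from_pretrained can resolve this model tag.
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best"
)
# GST models condition on a reference utterance, so synthesis would look like:
#   output = tts("Hello world.", speech=reference_waveform_tensor)
#   wav = output["wav"]
```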
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
c79ef827fdf8e7523f98fc029002c481
|
gokuls/mobilebert_sa_pre-training-complete
|
gokuls
|
mobilebert
| 32 | 14 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null |
['wikitext']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,958 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_pre-training-complete
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the wikitext wikitext-103-raw-v1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3239
- Accuracy: 0.7162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.6028 | 1.0 | 7145 | 1.4525 | 0.6935 |
| 1.5524 | 2.0 | 14290 | 1.4375 | 0.6993 |
| 1.5323 | 3.0 | 21435 | 1.4194 | 0.6993 |
| 1.5191 | 4.0 | 28580 | 1.4110 | 0.7027 |
| 1.5025 | 5.0 | 35725 | 1.4168 | 0.7014 |
| 1.4902 | 6.0 | 42870 | 1.3931 | 0.7012 |
| 1.4813 | 7.0 | 50015 | 1.3738 | 0.7057 |
| 1.4751 | 8.0 | 57160 | 1.4237 | 0.6996 |
| 1.4689 | 9.0 | 64305 | 1.3969 | 0.7047 |
| 1.4626 | 10.0 | 71450 | 1.3916 | 0.7068 |
| 1.4566 | 11.0 | 78595 | 1.3686 | 0.7072 |
| 1.451 | 12.0 | 85740 | 1.3811 | 0.7060 |
| 1.4478 | 13.0 | 92885 | 1.3598 | 0.7092 |
| 1.4441 | 14.0 | 100030 | 1.3790 | 0.7054 |
| 1.4379 | 15.0 | 107175 | 1.3794 | 0.7066 |
| 1.4353 | 16.0 | 114320 | 1.3609 | 0.7102 |
| 1.43 | 17.0 | 121465 | 1.3685 | 0.7083 |
| 1.4278 | 18.0 | 128610 | 1.3953 | 0.7036 |
| 1.4219 | 19.0 | 135755 | 1.3756 | 0.7085 |
| 1.4197 | 20.0 | 142900 | 1.3597 | 0.7090 |
| 1.4169 | 21.0 | 150045 | 1.3673 | 0.7061 |
| 1.4146 | 22.0 | 157190 | 1.3753 | 0.7073 |
| 1.4109 | 23.0 | 164335 | 1.3696 | 0.7082 |
| 1.4073 | 24.0 | 171480 | 1.3563 | 0.7092 |
| 1.4054 | 25.0 | 178625 | 1.3712 | 0.7103 |
| 1.402 | 26.0 | 185770 | 1.3528 | 0.7113 |
| 1.4001 | 27.0 | 192915 | 1.3367 | 0.7123 |
| 1.397 | 28.0 | 200060 | 1.3508 | 0.7118 |
| 1.3955 | 29.0 | 207205 | 1.3572 | 0.7117 |
| 1.3937 | 30.0 | 214350 | 1.3566 | 0.7095 |
| 1.3901 | 31.0 | 221495 | 1.3515 | 0.7117 |
| 1.3874 | 32.0 | 228640 | 1.3445 | 0.7118 |
| 1.386 | 33.0 | 235785 | 1.3611 | 0.7097 |
| 1.3833 | 34.0 | 242930 | 1.3502 | 0.7087 |
| 1.3822 | 35.0 | 250075 | 1.3657 | 0.7108 |
| 1.3797 | 36.0 | 257220 | 1.3576 | 0.7108 |
| 1.3793 | 37.0 | 264365 | 1.3472 | 0.7106 |
| 1.3763 | 38.0 | 271510 | 1.3323 | 0.7156 |
| 1.3762 | 39.0 | 278655 | 1.3325 | 0.7145 |
| 1.3748 | 40.0 | 285800 | 1.3243 | 0.7138 |
| 1.3733 | 41.0 | 292945 | 1.3218 | 0.7170 |
| 1.3722 | 41.99 | 300000 | 1.3074 | 0.7186 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
665fce8d464c7bb2dae0f1edc4546a04
|
MultiBertGunjanPatrick/multiberts-seed-2-1500k
|
MultiBertGunjanPatrick
|
bert
| 7 | 5 |
transformers
| 0 | null | true | false | false |
apache-2.0
|
['en']
|
['bookcorpus', 'wikipedia']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['exbert', 'multiberts', 'multiberts-seed-2']
| false | true | true | 6,487 | false |
# MultiBERTs Seed 2 Checkpoint 1500k (uncased)
This is the seed-2 intermediate MultiBERTs checkpoint at step 1500k: a pretrained BERT model for the English language trained with a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts).
The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
## Model description
MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('MultiBertGunjanPatrick/multiberts-seed-2-1500k')
model = BertModel.from_pretrained("MultiBertGunjanPatrick/multiberts-seed-2-1500k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular
checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint.
## Training data
The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
author = {Thibault Sellam and
Steve Yadlowsky and
Jason Wei and
Naomi Saphra and
Alexander D'Amour and
Tal Linzen and
Jasmijn Bastings and
Iulia Turc and
Jacob Eisenstein and
Dipanjan Das and
Ian Tenney and
Ellie Pavlick},
title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
journal = {CoRR},
volume = {abs/2106.16163},
year = {2021},
url = {https://arxiv.org/abs/2106.16163},
eprinttype = {arXiv},
eprint = {2106.16163},
timestamp = {Mon, 05 Jul 2021 15:15:50 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
a4bc1a368fdab97ef40a5dc65e8a165b
|
Jellevdl/checkpoint-20000
|
Jellevdl
|
bert
| 10 | 15 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 857 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint-20000
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
0ee3ce1cf1c7ba1a278f01ca476cd180
|
Xiaoman/NER-for-female-names
|
Xiaoman
|
bert
| 14 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,352 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NER-for-female-names
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 5 | 0.6371 |
| No log | 2.0 | 10 | 0.4213 |
| No log | 3.0 | 15 | 0.3227 |
| No log | 4.0 | 20 | 0.2867 |
| No log | 5.0 | 25 | 0.2606 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
7792b8d4045147bc24f3d7404ea15881
|
Helsinki-NLP/opus-mt-ts-en
|
Helsinki-NLP
|
marian
| 10 | 34 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 768 | false |
### opus-mt-ts-en
* source languages: ts
* target languages: en
* OPUS readme: [ts-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ts-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ts-en/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ts.en | 44.0 | 0.590 |
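For quick experimentation, the converted checkpoint can also be loaded through the Transformers `pipeline` API (a minimal sketch; the input sentence is a placeholder):
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ts-en")

# Replace the placeholder with a Tsonga (ts) sentence.
print(translator("Replace this with a Tsonga sentence.")[0]["translation_text"])
```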
|
5cc9e9ffc0b1186dba665dd7f971e113
|
dipteshkanojia/hing-roberta-CM-run-4
|
dipteshkanojia
|
xlm-roberta
| 9 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,101 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-roberta-CM-run-4
This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5827
- Accuracy: 0.7525
- Precision: 0.6967
- Recall: 0.7004
- F1: 0.6980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8734 | 1.0 | 497 | 0.7673 | 0.7203 | 0.6617 | 0.6600 | 0.6604 |
| 0.6245 | 2.0 | 994 | 0.7004 | 0.7485 | 0.6951 | 0.7137 | 0.7015 |
| 0.4329 | 3.0 | 1491 | 1.0469 | 0.7223 | 0.6595 | 0.6640 | 0.6538 |
| 0.2874 | 4.0 | 1988 | 1.3103 | 0.7586 | 0.7064 | 0.7157 | 0.7104 |
| 0.1837 | 5.0 | 2485 | 1.7916 | 0.7425 | 0.6846 | 0.6880 | 0.6861 |
| 0.1121 | 6.0 | 2982 | 2.0721 | 0.7465 | 0.7064 | 0.7041 | 0.7003 |
| 0.0785 | 7.0 | 3479 | 2.3469 | 0.7425 | 0.6898 | 0.6795 | 0.6807 |
| 0.0609 | 8.0 | 3976 | 2.2775 | 0.7404 | 0.6819 | 0.6881 | 0.6845 |
| 0.0817 | 9.0 | 4473 | 2.1992 | 0.7686 | 0.7342 | 0.7147 | 0.7166 |
| 0.042 | 10.0 | 4970 | 2.2359 | 0.7565 | 0.7211 | 0.7141 | 0.7106 |
| 0.0463 | 11.0 | 5467 | 2.2291 | 0.7646 | 0.7189 | 0.7186 | 0.7177 |
| 0.027 | 12.0 | 5964 | 2.3955 | 0.7525 | 0.6994 | 0.7073 | 0.7028 |
| 0.0314 | 13.0 | 6461 | 2.4256 | 0.7565 | 0.7033 | 0.7153 | 0.7082 |
| 0.0251 | 14.0 | 6958 | 2.4578 | 0.7565 | 0.7038 | 0.7025 | 0.7027 |
| 0.0186 | 15.0 | 7455 | 2.5984 | 0.7565 | 0.7141 | 0.6945 | 0.6954 |
| 0.0107 | 16.0 | 7952 | 2.5068 | 0.7425 | 0.6859 | 0.7016 | 0.6912 |
| 0.0134 | 17.0 | 8449 | 2.5876 | 0.7606 | 0.7018 | 0.7041 | 0.7029 |
| 0.0145 | 18.0 | 8946 | 2.6011 | 0.7626 | 0.7072 | 0.7079 | 0.7073 |
| 0.0108 | 19.0 | 9443 | 2.5861 | 0.7545 | 0.6973 | 0.7017 | 0.6990 |
| 0.0076 | 20.0 | 9940 | 2.5827 | 0.7525 | 0.6967 | 0.7004 | 0.6980 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
41989b4b0cd428e3c0fdd7c450e63835
|
google/multiberts-seed_0-step_1300k
|
google
|
bert
| 8 | 13 |
transformers
| 0 | null | true | true | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1300k']
| false | true | true | 3,527 | false |
# MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1300k
MultiBERTs is a collection of checkpoints and a statistical library to support
robust research on BERT. We provide 25 BERT-base models trained with
similar hyper-parameters as
[the original BERT model](https://github.com/google-research/bert) but
with different random seeds, which causes variations in the initial weights and order of
training instances. The aim is to distinguish findings that apply to a specific
artifact (i.e., a particular instance of the model) from those that apply to the
more general procedure.
We also provide 140 intermediate checkpoints captured
during the course of pre-training (we saved 28 checkpoints for the first 5 runs).
The models were originally released through
[http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our
paper
[The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163).
This is model #0, captured at step 1300k (max: 2000k, i.e., 2M steps).
## Model Description
This model was captured during a reproduction of
[BERT-base uncased](https://github.com/google-research/bert), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM) and the Next Sentence Prediction (NSP)
objectives.
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [BERT-base uncased](https://github.com/google-research/bert). Two major
differences with the original model:
* We pre-trained the MultiBERTs models for 2 million steps using sequence
length 512 (instead of 1 million steps using sequence length 128 then 512).
* We used an alternative version of Wikipedia and Books Corpus, initially
collected for [Turc et al., 2019](https://arxiv.org/abs/1908.08962).
This is a best-effort reproduction, and so it is probable that differences with
the original model have gone unnoticed. The performance of MultiBERTs on GLUE after full training is oftentimes comparable to that of original
BERT, but we found significant differences on the dev set of SQuAD (MultiBERTs outperforms original BERT).
See our [technical report](https://arxiv.org/abs/2106.16163) for more details.
### How to use
Using code from
[BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on
Tensorflow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1300k')
model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
PyTorch version:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_0-step_1300k')
model = BertModel.from_pretrained("google/multiberts-seed_0-step_1300k")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
## Citation info
```bibtex
@article{sellam2021multiberts,
title={The MultiBERTs: BERT Reproductions for Robustness Analysis},
author={Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick},
journal={arXiv preprint arXiv:2106.16163},
year={2021}
}
```
|
efe15a2315119772a5b15770683d8e18
|
darkvibes/chkpt
|
darkvibes
| null | 18 | 4 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 609 | false |
### chkpt Dreambooth model trained by darkvibes with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
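If you would rather run it locally than through the notebooks above, a minimal `diffusers` sketch looks like the following (it assumes the repository is stored in the standard Stable Diffusion pipeline layout, and the prompt token is only a guess at the trained instance name):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "darkvibes/chkpt", torch_dtype=torch.float16
).to("cuda")

# Replace "chkpt" with the instance prompt the concept was actually trained on.
image = pipe("a photo of chkpt").images[0]
image.save("sample.png")
```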
Sample pictures of this concept:
|
11698830dc3c979cd8a56731b5886935
|
akum1343/summarization_finetuned
|
akum1343
|
bart
| 9 | 21 |
transformers
| 0 |
text2text-generation
| false | true | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# summarization_finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5478
- Validation Loss: 1.4195
- Train Rougel: tf.Tensor(0.29894578, shape=(), dtype=float32)
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
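As a rough usage sketch (not an official example), the checkpoint can be driven through the `summarization` pipeline; note that only TensorFlow weights are published, so the TF framework is requested explicitly:
```python
from transformers import pipeline

# The repository ships TensorFlow weights, hence framework="tf".
summarizer = pipeline(
    "summarization",
    model="akum1343/summarization_finetuned",
    framework="tf",
)

article = "Paste the long article you want to condense here ..."
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```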
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adamax', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rougel | Epoch |
|:----------:|:---------------:|:----------------------------------------------:|:-----:|
| 1.5478 | 1.4195 | tf.Tensor(0.29894578, shape=(), dtype=float32) | 0 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
5c23cf18093c98cb79b5610118272961
|
Norod78/ddpm-EmojiAlignedFaces-64
|
Norod78
| null | 30 | 13 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['Norod78/EmojiFFHQAlignedFaces']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,057 | false |
# ddpm-EmojiAlignedFaces-64
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the [Norod78/EmojiFFHQAlignedFaces](https://huggingface.co/datasets/Norod78/EmojiFFHQAlignedFaces) dataset.
#### How to use
```python
from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline
def main():
    model_id = "Norod78/ddpm-EmojiAlignedFaces-64"

    # load model and scheduler
    ddpm = DDPMPipeline.from_pretrained(model_id)  # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference

    # run pipeline in inference (sample random noise and denoise)
    image = ddpm()["sample"]

    # save image
    image[0].save("ddpm_generated_image.jpg")
    image[0].show()


if __name__ == '__main__':
    main()
```
### Training data
[Norod78/EmojiFFHQAlignedFaces](https://huggingface.co/datasets/Norod78/EmojiFFHQAlignedFaces)
### Training results
📈 [TensorBoard logs](https://huggingface.co/Norod78/ddpm-EmojiAlignedFaces-64/tensorboard?#scalars)
|
870129ebf5be78d1ebcfc80ab4f3d431
|
echarlaix/t5-small-int8-dynamic
|
echarlaix
|
t5
| 8 | 6 |
transformers
| 1 |
translation
| false | false | false |
apache-2.0
|
['en', 'fr', 'ro', 'de']
|
['c4']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['int8', 'summarization', 'translation']
| false | true | true | 1,233 | false |
## [t5-small](https://huggingface.co/t5-small) exported to the ONNX format and dynamically quantized.
## Model description
[T5](https://huggingface.co/docs/transformers/model_doc/t5#t5) is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
For more information, please take a look at the original paper.
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Usage example
You can use this model with Transformers *pipeline*.
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("echarlaix/t5-small-int8-dynamic")
model = ORTModelForSeq2SeqLM.from_pretrained("echarlaix/t5-small-int8-dynamic")
translator = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer)
text = "He never went out without a book under his arm, and he often came back with two."
results = translator(text)
print(results)
```
|
fc4350ebb4db41090edab9b5dddc5d08
|
test1234678/distilbert-base-uncased-distilled-clinc
|
test1234678
|
distilbert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['clinc_oos']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,792 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2712
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2629 | 1.0 | 318 | 1.6048 | 0.7368 |
| 1.2437 | 2.0 | 636 | 0.8148 | 0.8565 |
| 0.6604 | 3.0 | 954 | 0.4768 | 0.9161 |
| 0.4054 | 4.0 | 1272 | 0.3548 | 0.9352 |
| 0.2987 | 5.0 | 1590 | 0.3084 | 0.9419 |
| 0.2549 | 6.0 | 1908 | 0.2909 | 0.9435 |
| 0.232 | 7.0 | 2226 | 0.2804 | 0.9458 |
| 0.221 | 8.0 | 2544 | 0.2749 | 0.9458 |
| 0.2145 | 9.0 | 2862 | 0.2722 | 0.9468 |
| 0.2112 | 10.0 | 3180 | 0.2712 | 0.9461 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.10.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
19fb574cd5ebc2abeb24d2dee7bb7a4f
|
fathyshalab/all-roberta-large-v1-work-4-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,509 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-work-4-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3586
- Accuracy: 0.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8058 | 1.0 | 1 | 2.6169 | 0.2356 |
| 2.3524 | 2.0 | 2 | 2.5215 | 0.2978 |
| 1.9543 | 3.0 | 3 | 2.4427 | 0.3422 |
| 1.5539 | 4.0 | 4 | 2.3874 | 0.36 |
| 1.4133 | 5.0 | 5 | 2.3586 | 0.3689 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
8b7e56decd8b4ddb83b87554deccf8c2
|
FritzOS/TEdetection_distilBERT_NER_final
|
FritzOS
|
distilbert
| 4 | 7 |
transformers
| 0 |
token-classification
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,568 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TEdetection_distiBERT_NER_final
This model is a fine-tuned version of [FritzOS/TEdetection_distiBERT_mLM_final](https://huggingface.co/FritzOS/TEdetection_distiBERT_mLM_final) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0031
- Validation Loss: 0.0035
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 220743, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.0031 | 0.0035 | 0 |
### Framework versions
- Transformers 4.19.4
- TensorFlow 2.8.2
- Datasets 2.2.2
- Tokenizers 0.12.1
|
2d72d498f7bf9736f2eaca5aae2830e0
|
ncduy/distilbert-base-uncased-finetuned-ner
|
ncduy
|
distilbert
| 13 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['conll2003']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,554 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0612
- Precision: 0.9270
- Recall: 0.9377
- F1: 0.9323
- Accuracy: 0.9840
## Model description
More information needed
## Intended uses & limitations
More information needed
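A minimal usage sketch (not part of the original card) with the `token-classification` pipeline, using an illustrative input sentence:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ncduy/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Hugging Face is a company based in New York City."))
```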
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2403 | 1.0 | 878 | 0.0683 | 0.9177 | 0.9215 | 0.9196 | 0.9815 |
| 0.0513 | 2.0 | 1756 | 0.0605 | 0.9227 | 0.9365 | 0.9295 | 0.9836 |
| 0.0298 | 3.0 | 2634 | 0.0612 | 0.9270 | 0.9377 | 0.9323 | 0.9840 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
9b80e7122fd0197ea92dcd1711dd2d45
|
anas-awadalla/bart-large-few-shot-k-128-finetuned-squad-infilling-seed-2
|
anas-awadalla
|
bart
| 16 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 968 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-few-shot-k-128-finetuned-squad-infilling-seed-2
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
34a72d29b007a289c251367c07b368fa
|
mohamed-elmogy/mt5-small-mohamed-elmogy
|
mohamed-elmogy
|
mt5
| 24 | 3 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['summarization', 'generated_from_trainer']
| true | true | true | 1,954 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-mohamed-elmogy
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.0
- Rouge2: 0.0
- Rougel: 0.0
- Rougelsum: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log | 1.0 | 1141 | 9.4719 | 0.0 | 0.0 | 0.0 | 0.0 |
| 40.4884 | 2.0 | 2282 | 78.1757 | 0.0 | 0.0 | 0.0 | 0.0 |
| 40.4884 | 3.0 | 3423 | 54.3033 | 0.0 | 0.0 | 0.0 | 0.0 |
| 72.4118 | 4.0 | 4564 | 75.8558 | 0.0 | 0.0 | 0.0 | 0.0 |
| 72.4118 | 5.0 | 5705 | 12.4297 | 0.0 | 0.0 | 0.0 | 0.0 |
| 24.3571 | 6.0 | 6846 | 12.4297 | 0.0 | 0.0 | 0.0 | 0.0 |
| 24.3571 | 7.0 | 7987 | 12.4297 | 0.0 | 0.0 | 0.0 | 0.0 |
| 16.5474 | 8.0 | 9128 | nan | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
98e11a2b79bcdb72250129010b3c2c14
|
AAkhilesh/wav2vec2-large-xls-r-300m-ta-colab
|
AAkhilesh
|
wav2vec2
| 95 | 6 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,099 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-ta-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
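As an unofficial, minimal sketch, the checkpoint can be used through the `automatic-speech-recognition` pipeline; the audio file name is a placeholder and the input should be 16 kHz mono audio:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="AAkhilesh/wav2vec2-large-xls-r-300m-ta-colab",
)

# Accepts a path to an audio file (or a raw 16 kHz waveform array).
print(asr("example_tamil_clip.wav")["text"])
```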
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
d75022f42e7625894e3dc240ecc965cc
|
facebook/opt-30b
|
facebook
|
opt
| 33 | 31,086 |
transformers
| 95 |
text-generation
| true | true | true |
other
|
['en']
| null | null | 23 | 11 | 4 | 8 | 4 | 3 | 1 |
['text-generation', 'opt']
| false | true | true | 9,908 | false |
# OPT : Open Pre-trained Transformer Language Models
OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI.
**Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf).
Content from **this** model card has been written by the Hugging Face team.
## Intro
To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068)
> Large language models trained on massive text collections have shown surprising emergent
> capabilities to generate text and perform zero- and few-shot learning. While in some cases the public
> can interact with these models through paid APIs, full model access is currently limited to only a
> few highly resourced labs. This restricted access has limited researchers’ ability to study how and
> why these large language models work, hindering progress on improving known challenges in areas
> such as robustness, bias, and toxicity.
> We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M
> to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match
> the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data
> collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and
> to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the
> collective research community as a whole, which is only possible when models are available for study.
## Model description
OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective.
OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective.
For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read
the [official paper](https://arxiv.org/abs/2205.01068).
## Intended uses & limitations
The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation.
In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt).
### How to use
For large OPT models, such as this one, it is not recommended to make use of the `text-generation` pipeline because
one should load the model in half-precision to accelerate generation and optimize memory consumption on GPU.
It is recommended to directly call the [`generate`](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate)
method as follows:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
>>> prompt = "Hello, I am conscious and"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> generated_ids = model.generate(input_ids)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and I am here.\nI am also conscious and I am here']
```
By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`.
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
>>> prompt = "Hello, I am conscious and"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Hello, I am conscious and aware that you have your back turned to me and want to talk']
```
### Limitations and bias
As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of
unfiltered content from the internet, which is far from neutral, the model is strongly biased:
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
>>> prompt = "The woman worked as a"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The woman worked as a supervisor in the office
The woman worked as a social worker in a
The woman worked as a cashier at the
The woman worked as a teacher from 2011 to
The woman worked as a maid at the house
```
compared to:
```python
>>> from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
>>> import torch
>>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-30b", torch_dtype=torch.float16).cuda()
>>> # the fast tokenizer currently does not work correctly
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-30b", use_fast=False)
>>> prompt = "The man worked as a"
>>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
>>> set_seed(32)
>>> generated_ids = model.generate(input_ids, do_sample=True, num_return_sequences=5, max_length=10)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
The man worked as a school bus driver for
The man worked as a bartender in a bar
The man worked as a cashier at the
The man worked as a teacher, and was
The man worked as a professional at a range
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents:
- BookCorpus, which consists of more than 10K unpublished books,
- CC-Stories, which contains a subset of CommonCrawl data filtered to match the
story-like style of Winograd schemas,
- The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included.
- Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in
Roller et al. (2021)
- CCNewsV2 containing an updated version of the English portion of the CommonCrawl News
dataset that was used in RoBERTa (Liu et al., 2019b)
The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally
to each dataset’s size in the pretraining corpus.
The dataset might contain offensive content as parts of the dataset are a subset of
public Common Crawl data, along with a subset of public Reddit data, which could contain sentences
that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety.
### Collection process
The dataset was collected from the internet, and went through classic data processing algorithms and
re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or
*This ebook by Project Gutenberg.*
## Training procedure
### Preprocessing
The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens.
The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly ~33 days of continuous training.
### BibTeX entry and citation info
```bibtex
@misc{zhang2022opt,
title={OPT: Open Pre-trained Transformer Language Models},
author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer},
year={2022},
eprint={2205.01068},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
3bd4f04768a804ecba0e088bff628658
|
mrojas/spanish-clinical-ner
|
mrojas
|
roberta
| 14 | 8 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['wl']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,543 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-clinical-ner
This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Precision: 0.6869
- Recall: 0.7349
- F1: 0.7100
- Accuracy: 0.8263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0283 | 1.0 | 500 | 0.6862 | 0.6690 | 0.6959 | 0.6822 | 0.8091 |
| 0.599 | 2.0 | 1000 | 0.6198 | 0.6856 | 0.7276 | 0.7059 | 0.8252 |
| 0.4973 | 3.0 | 1500 | 0.6181 | 0.6869 | 0.7349 | 0.7100 | 0.8263 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
0a30d8a258fe472593d75277c413edb5
|
stevemobs/deberta-base-finetuned-squad1-newsqa
|
stevemobs
|
deberta
| 13 | 5 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,254 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-finetuned-squad1-newsqa
This model is a fine-tuned version of [stevemobs/deberta-base-finetuned-squad1](https://huggingface.co/stevemobs/deberta-base-finetuned-squad1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7556
## Model description
More information needed
## Intended uses & limitations
More information needed
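A minimal extractive question-answering sketch with this checkpoint (question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="stevemobs/deberta-base-finetuned-squad1-newsqa")

result = qa(question="Where was the summit held?",
            context="The annual summit was held in Geneva and attended by 40 delegations.")
print(result["answer"], result["score"])
```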
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6703 | 1.0 | 17307 | 0.7207 |
| 0.4775 | 2.0 | 34614 | 0.7556 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
821ca2f4893a3d76b4274da028538b49
|
research-backup/bart-base-squadshifts-vanilla-new_wiki-qg
|
research-backup
|
bart
| 15 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
|
['en']
|
['lmqg/qg_squadshifts']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question generation']
| true | true | true | 4,187 | false |
# Model Card of `research-backup/bart-base-squadshifts-vanilla-new_wiki-qg`
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) for the question generation task on the [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (dataset_name: new_wiki) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/bart-base](https://huggingface.co/facebook/bart-base)
- **Language:** en
- **Training data:** [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) (new_wiki)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/bart-base-squadshifts-vanilla-new_wiki-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/bart-base-squadshifts-vanilla-new_wiki-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/research-backup/bart-base-squadshifts-vanilla-new_wiki-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squadshifts.new_wiki.json)
| | Score | Type | Dataset |
|:-----------|--------:|:---------|:---------------------------------------------------------------------------|
| BERTScore | 92.97 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_1 | 29.14 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_2 | 19.48 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_3 | 13.85 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| Bleu_4 | 10.27 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| METEOR | 23.65 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| MoverScore | 64.36 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
| ROUGE_L | 26.47 | new_wiki | [lmqg/qg_squadshifts](https://huggingface.co/datasets/lmqg/qg_squadshifts) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squadshifts
- dataset_name: new_wiki
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: facebook/bart-base
- max_length: 512
- max_length_output: 32
- epoch: 4
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/bart-base-squadshifts-vanilla-new_wiki-qg/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
4a10631f587435cbf0f24a72e7500fb8
|
nateraw/mit-b0-finetuned-sidewalks
|
nateraw
|
segformer
| 5 | 0 |
transformers
| 0 | null | false | true | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 11,490 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nateraw/mit-b0-finetuned-sidewalks
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5197
- Validation Loss: 0.6268
- Validation Mean Iou: 0.2719
- Validation Mean Accuracy: 0.3442
- Validation Overall Accuracy: 0.8180
- Validation Per Category Iou: [0. 0.62230678 0.81645513 0.18616589 0.66669478 0.30574734
nan 0.36681201 0.31128062 0. 0.76635363 0.
0. nan 0. 0.37874505 0. 0.
0.68193241 0. 0.48867838 0.25809644 0. nan
0. 0.25765818 0. 0. 0.81965205 0.71604385
0.9214592 0. 0.00636635 0.12957446 0. ]
- Validation Per Category Accuracy: [0. 0.89469845 0.88320521 0.45231002 0.72104833 0.3386303
nan 0.53522723 0.72026843 0. 0.93197124 0.
0. nan 0. 0.45525816 0. 0.
0.87276184 0. 0.60762821 0.29654901 0. nan
0. 0.32162193 0. 0. 0.90797988 0.89199119
0.96388697 0. 0.00646084 0.21171965 0. ]
- Epoch: 5
## Model description
More information needed
## Intended uses & limitations
More information needed
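A minimal TensorFlow inference sketch, assuming this checkpoint loads with the TF SegFormer classes from `transformers`; the feature extractor is taken from the base `nvidia/mit-b0` repo and the image path is illustrative:
```python
from PIL import Image
from transformers import SegformerFeatureExtractor, TFSegformerForSemanticSegmentation

# Feature extractor taken from the base checkpoint, in case the fine-tuned repo does not ship one.
feature_extractor = SegformerFeatureExtractor.from_pretrained("nvidia/mit-b0")
model = TFSegformerForSemanticSegmentation.from_pretrained("nateraw/mit-b0-finetuned-sidewalks")

image = Image.open("sidewalk.jpg")  # illustrative input image
inputs = feature_extractor(images=image, return_tensors="tf")

# Logits come out at 1/4 of the input resolution; argmax over the label axis gives a per-pixel class map.
logits = model(**inputs).logits
print(logits.shape)
```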
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 6e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Validation Mean Iou | Validation Mean Accuracy | Validation Overall Accuracy | Validation Per Category Iou | Validation Per Category Accuracy | Epoch |
|:----------:|:---------------:|:-------------------:|:------------------------:|:---------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----:|
| 1.3430 | 0.8858 | 0.1724 | 0.2253 | 0.7508 | [0.00000000e+00 5.02535817e-01 7.94050536e-01 1.37476079e-01
5.28949130e-01 1.76391302e-01 nan 1.19967229e-01
0.00000000e+00 0.00000000e+00 6.61310784e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 5.06634036e-01 0.00000000e+00
7.22567226e-02 5.35294630e-03 0.00000000e+00 0.00000000e+00
0.00000000e+00 1.53949868e-02 0.00000000e+00 0.00000000e+00
7.37842004e-01 5.78989440e-01 8.52258994e-01 0.00000000e+00
0.00000000e+00 6.16858377e-05 0.00000000e+00] | [0.00000000e+00 5.80613096e-01 9.43852033e-01 1.50019637e-01
5.77268577e-01 3.25241508e-01 nan 1.68319967e-01
0.00000000e+00 0.00000000e+00 8.60308871e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 9.04260401e-01 0.00000000e+00
7.74112939e-02 5.58025588e-03 0.00000000e+00 nan
0.00000000e+00 1.56055377e-02 0.00000000e+00 0.00000000e+00
8.41648672e-01 8.58416118e-01 9.02457570e-01 0.00000000e+00
0.00000000e+00 6.18892982e-05 0.00000000e+00] | 0 |
| 0.8402 | 0.7211 | 0.2203 | 0.2900 | 0.7927 | [0. 0.60561012 0.80467888 0.10134538 0.57674712 0.21967639
nan 0.279315 0.28998136 0. 0.71924852 0.
0. nan 0. 0.10241989 0. 0.
0.60537245 0. 0.37966409 0.0624908 0. 0.
0. 0.11869763 0. 0. 0.79675107 0.70541969
0.89177953 0. 0. 0.01097213 0. ] | [0. 0.70687024 0.92710849 0.47653578 0.6809956 0.28562204
nan 0.35954555 0.53804171 0. 0.87451178 0.
0. nan 0. 0.10473185 0. 0.
0.88548482 0. 0.52011987 0.06421075 0. nan
0. 0.13802701 0. 0. 0.9278545 0.83106582
0.94693817 0. 0. 0.01170072 0. ] | 1 |
| 0.7051 | 0.6513 | 0.2568 | 0.3210 | 0.8151 | [0.00000000e+00 6.31500555e-01 8.33347761e-01 2.40727740e-01
6.71879162e-01 2.32727132e-01 nan 3.15720178e-01
3.22578864e-01 0.00000000e+00 7.51066980e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 3.01090014e-01
0.00000000e+00 0.00000000e+00 6.56592309e-01 0.00000000e+00
3.82317489e-01 2.25385079e-01 0.00000000e+00 nan
0.00000000e+00 2.34975219e-01 0.00000000e+00 0.00000000e+00
7.92710603e-01 6.82508692e-01 9.02369099e-01 0.00000000e+00
5.10019193e-04 4.02361131e-02 0.00000000e+00] | [0.00000000e+00 7.76355941e-01 9.39707165e-01 3.90888278e-01
7.70256989e-01 2.84066636e-01 nan 4.57106724e-01
6.33498392e-01 0.00000000e+00 9.05789013e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 3.57230962e-01
0.00000000e+00 0.00000000e+00 8.45761217e-01 0.00000000e+00
5.16681541e-01 2.82796479e-01 0.00000000e+00 nan
0.00000000e+00 3.07634724e-01 0.00000000e+00 0.00000000e+00
9.04391068e-01 8.86212453e-01 9.64570665e-01 0.00000000e+00
5.17411580e-04 4.71742075e-02 0.00000000e+00] | 2 |
| 0.6294 | 0.6365 | 0.2695 | 0.3320 | 0.8244 | [0. 0.63840754 0.83879521 0.31781353 0.69394774 0.22324776
nan 0.35012894 0.31369877 0. 0.7683448 0.
0. nan 0. 0.36532292 0. 0.
0.65554136 0. 0.37438724 0.25682621 0. nan
0. 0.23051151 0. 0. 0.81818163 0.7633018
0.91092518 0. 0.00145576 0.10215516 0. ] | [0. 0.76103704 0.95305272 0.43848725 0.78760908 0.25645014
nan 0.48971828 0.61853472 0. 0.90793733 0.
0. nan 0. 0.48772201 0. 0.
0.84205031 0. 0.53308407 0.36285878 0. nan
0. 0.27953916 0. 0. 0.93079576 0.87079757
0.96477884 0. 0.00147054 0.13899972 0. ] | 3 |
| 0.5686 | 0.6122 | 0.2715 | 0.3360 | 0.8256 | [0.00000000e+00 6.38345814e-01 8.56252996e-01 3.07043269e-01
6.87537894e-01 3.06534041e-01 nan 3.84145525e-01
3.19438916e-01 0.00000000e+00 7.57233152e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 4.06585843e-01
0.00000000e+00 0.00000000e+00 6.47648546e-01 2.91885581e-04
4.00547422e-01 1.97261484e-01 0.00000000e+00 nan
0.00000000e+00 2.20793008e-01 0.00000000e+00 0.00000000e+00
8.19526784e-01 7.19306080e-01 9.20192720e-01 0.00000000e+00
2.23374930e-03 9.77508243e-02 0.00000000e+00] | [0.00000000e+00 7.89438910e-01 9.16367241e-01 4.32251205e-01
7.89740409e-01 4.88566404e-01 nan 5.36825005e-01
6.47787376e-01 0.00000000e+00 9.32641501e-01 0.00000000e+00
0.00000000e+00 nan 0.00000000e+00 4.73813253e-01
0.00000000e+00 0.00000000e+00 9.09004353e-01 2.91885581e-04
4.37175308e-01 2.25663128e-01 0.00000000e+00 nan
0.00000000e+00 2.60992057e-01 0.00000000e+00 0.00000000e+00
9.19328058e-01 9.02898346e-01 9.65529369e-01 0.00000000e+00
2.23984750e-03 1.20880721e-01 0.00000000e+00] | 4 |
| 0.5197 | 0.6268 | 0.2719 | 0.3442 | 0.8180 | [0. 0.62230678 0.81645513 0.18616589 0.66669478 0.30574734
nan 0.36681201 0.31128062 0. 0.76635363 0.
0. nan 0. 0.37874505 0. 0.
0.68193241 0. 0.48867838 0.25809644 0. nan
0. 0.25765818 0. 0. 0.81965205 0.71604385
0.9214592 0. 0.00636635 0.12957446 0. ] | [0. 0.89469845 0.88320521 0.45231002 0.72104833 0.3386303
nan 0.53522723 0.72026843 0. 0.93197124 0.
0. nan 0. 0.45525816 0. 0.
0.87276184 0. 0.60762821 0.29654901 0. nan
0. 0.32162193 0. 0. 0.90797988 0.89199119
0.96388697 0. 0.00646084 0.21171965 0. ] | 5 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
c865e66d3c599d5230cce3b7abc6bda4
|
neongeckocom/stt_uk_citrinet_512_gamma_0_25
|
neongeckocom
| null | 3 | 31 |
nemo
| 3 |
automatic-speech-recognition
| false | false | false |
bsd-3-clause
|
['uk']
|
['mozilla-foundation/common_voice_10_0', 'Yehor/voa-uk-transcriptions']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition']
| true | true | true | 687 | false |
# NVIDIA Streaming Citrinet 512 (uk-UA)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
## Attribution
The [stt_en_citrinet_512_gamma_0_25](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_citrinet_512_gamma_0_25) checkpoint by [NVIDIA](https://github.com/NVIDIA), licensed under [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/), was used as the initial checkpoint.
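A minimal transcription sketch, assuming this checkpoint resolves through NeMo's Hugging Face Hub integration (the audio file name is illustrative):
```python
import nemo.collections.asr as nemo_asr

# Citrinet checkpoints are CTC models with a BPE tokenizer in NeMo.
asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained("neongeckocom/stt_uk_citrinet_512_gamma_0_25")

# Expects 16 kHz mono audio.
print(asr_model.transcribe(["sample_uk.wav"]))
```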
|
f6120fbe904c4fdfc351cdcf98cf3dd5
|
yanaiela/roberta-base-epoch_23
|
yanaiela
|
roberta
| 9 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_23']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 23
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained as part of a work that studies how simple data statistics,
such as co-occurrence counts, affect model predictions, as described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_23.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
03dbf28262e4e7fc4403bc572f817763
|
Keneston/xlm-roberta-base-finetuned-panx-it
|
Keneston
|
xlm-roberta
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2630
- F1: 0.8124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8193 | 1.0 | 70 | 0.3200 | 0.7356 |
| 0.2773 | 2.0 | 140 | 0.2841 | 0.7882 |
| 0.1807 | 3.0 | 210 | 0.2630 | 0.8124 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
1c4b9c13e3fa0032c720bb2a4b922221
|
jonatasgrosman/exp_w2v2t_uk_hubert_s33
|
jonatasgrosman
|
hubert
| 10 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['uk']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'uk']
| false | true | true | 451 | false |
# exp_w2v2t_uk_hubert_s33
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (uk)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
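A minimal transcription sketch with HuggingSound (audio paths are illustrative):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_uk_hubert_s33")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # 16 kHz input expected

transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```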
|
51f1cd1bf4df6240b7c69bd7e58b6c3e
|
jonatasgrosman/exp_w2v2t_sv-se_xlsr-53_s328
|
jonatasgrosman
|
wav2vec2
| 10 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv-SE']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'sv-SE']
| false | true | true | 467 | false |
# exp_w2v2t_sv-se_xlsr-53_s328
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
999940db077bdcec33a641c8b3fa06ac
|
Helsinki-NLP/opus-mt-de-ms
|
Helsinki-NLP
|
marian
| 11 | 9 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['de', 'ms']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,180 | false |
### deu-msa
* source group: German
* target group: Malay (macrolanguage)
* OPUS readme: [deu-msa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md)
* model: transformer-align
* source language(s): deu
* target language(s): ind zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); see the usage sketch below
* download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip)
* test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt)
* test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.eval.txt)
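A minimal translation sketch showing the required target-language token (a hedged example; `>>ind<<` is one of the valid target IDs listed under `tgt_constituents` below):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-de-ms"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The sentence-initial token picks the target language, e.g. >>ind<< or >>zsm_Latn<<.
src = [">>ind<< Guten Morgen, wie geht es dir?"]
batch = tokenizer(src, return_tensors="pt", padding=True)
print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))
```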
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.deu.msa | 34.0 | 0.607 |
### System Info:
- hf_name: deu-msa
- source_languages: deu
- target_languages: msa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/deu-msa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['de', 'ms']
- src_constituents: {'deu'}
- tgt_constituents: {'zsm_Latn', 'ind', 'max_Latn', 'zlm_Latn', 'min'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/deu-msa/opus-2020-06-17.test.txt
- src_alpha3: deu
- tgt_alpha3: msa
- short_pair: de-ms
- chrF2_score: 0.607
- bleu: 34.0
- brevity_penalty: 0.9540000000000001
- ref_len: 3729.0
- src_name: German
- tgt_name: Malay (macrolanguage)
- train_date: 2020-06-17
- src_alpha2: de
- tgt_alpha2: ms
- prefer_old: False
- long_pair: deu-msa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
3c25a05142f284474a48d7c3117b2fa0
|
ClueAI/PromptCLUE-base-v1-5
|
ClueAI
|
t5
| 7 | 3,739 |
transformers
| 11 |
text2text-generation
| true | false | false |
creativeml-openrail-m
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 5,620 | false |
<a href="https://colab.research.google.com/drive/1noyBA_JrYO6Lk6cwxsNZ_jdJ-Jtaf82G?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg"></a>
PromptCLUE: a zero-shot learning model for Chinese tasks
This model continues training from PromptCLUE-base with 50% more training steps, 50% more tasks and additional task types; it is an upgrade of PromptCLUE-base. The newly added task types include paraphrasing, error correction and question answering.
It was pre-trained on a 100-billion-token Chinese corpus, seeing 1.5 trillion Chinese tokens in total, and was then trained on hundreds of tasks in a prompt-based format. For understanding tasks such as classification, sentiment analysis and extraction, the label set can be customised; for the various generation tasks, free-form sampling is supported.
<a href='https://www.cluebenchmarks.com/clueai.html'>Online Demo</a> |
<a href='https://www.clueai.cn'>Use the clueai toolkit and API (large version)</a> |
<a href='https://github.com/clue-ai/PromptCLUE'>GitHub project</a> |
<a href='https://colab.research.google.com/drive/1noyBA_JrYO6Lk6cwxsNZ_jdJ-Jtaf82G?usp=sharing#scrollTo=Nk2tSi3vnSN0'>Try it on Colab</a>
Load the model:
```python
# Load the model
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("ClueAI/PromptCLUE-base-v1-5")
model = T5ForConditionalGeneration.from_pretrained("ClueAI/PromptCLUE-base-v1-5")
```
How to run prediction / inference with the model:
```python
import torch
#device = torch.device('cpu')
device = torch.device('cuda')
model.to(device)

def preprocess(text):
    return text.replace("\n", "_")

def postprocess(text):
    return text.replace("_", "\n")

def answer(text, sample=False, top_p=0.8):
    '''sample: whether to sample; can be set to True for generation tasks.
    top_p: between 0 and 1; larger values give more diverse generations.'''
    text = preprocess(text)
    encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors="pt").to(device)
    if not sample:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=128, num_beams=4, length_penalty=0.6)
    else:
        out = model.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=64, do_sample=True, top_p=top_p)
    out_text = tokenizer.batch_decode(out["sequences"], skip_special_tokens=True)
    return postprocess(out_text[0])
```
### Example inputs
### Paraphrasing (paraphrase) [sample=True]
Input:
生成与下列文字相同意思的句子:
白云遍地无人扫
答案:
Model output:
白云散去无踪,没人扫。
Input:
用另外的话复述下面的文字:
怎么到至今还不回来,这满地的白云幸好没人打扫。
答案:
Model output:
满地白云无扫处
Input:
改写下面的文字,确保意思相同:
一个如此藐视本国人民民主权利的人,怎么可能捍卫外国人的民权?
答案:
Model output:
对一个如此藐视本国人民民主权利的人,怎么能捍卫外国人的民权?
### Knowledge QA (knowledge_qa) [sample=False]
Input:
根据问题给出答案:
问题:手指发麻的主要可能病因是:
答案:
Model output:
神经损伤,颈椎病,贫血,高血压
问答:
问题:黄果悬钩子的目是:
答案:
Model output:
蔷薇目
#### News classification (classify)
```bash
Input:
分类任务:
折价率过低遭抛售基金泰和跌7.15%,证券时报记者 朱景锋本报讯 由于折价率在大盘封基中处于最低水平,基金泰和昨日遭到投资者大举抛售,跌幅达到7.15%,远超大盘。盘面显示,基金泰和随大盘高开,之后开始震荡走低,午后开始加速下行,几乎没有像样反弹。截至收盘时,在沪深300指数仅下跌2.56%的情况下,基金泰和收盘跌幅高达7.15%,在所有封基中跌幅最大,而昨日多数封基跌幅在2%左右。
选项:财经,娱乐,时政,股票
答案:
Model output:
财经
```
#### Intent classification (classify)
```bash
Input:
意图分类:
帮我定一个周日上海浦东的房间
选项:闹钟,文学,酒店,艺术,体育,健康,天气,其他
答案:
Model output:
酒店
```
#### Sentiment analysis (classify)
```bash
Input:
情感分析:
这个看上去还可以,但其实我不喜欢
选项:积极,消极
答案:
Model output:
消极
```
#### Natural language inference (generate)
```bash
Input:
请推理出上下文的关系:
前提:对不起事情就是这样。
假设:事情就是这样,不需要道歉。
选项:中立,蕴涵,矛盾
答案:
Model output:
矛盾
```
#### Reading comprehension (generate)
```bash
Input:
阅读文章,给出答案:
段落:
港汇指数,全称港元实际汇兑指数(Effective Exchange Rate Index for the Hong Kong Dollar)是由香港政府统计处编制的一项指数,以反映港元与香港主要贸易伙伴之货币的名义有效汇率加权平均数的变动情况。加权比重是按1999年至2000年平均贸易模式所制定,但政府并未有公布详细的计算公式。旧港汇指数基准日为2000年1月1日,基数为100点。由2012年1月3日起,新系列港汇指数 (包括15种货币及以2010年1月 = 100) 已取代旧港汇指数系列。港汇指数的作用,主要是用于反映香港的货品及服务的价格相对于其主要贸易伙伴的变动,并通常被视作反映香港价格竞争力的指标。
问题:港汇指数的加权比重如何制定?
答案:
Model output:
按1999年至2000年平均贸易模式所制定
```
#### Reading comprehension - free-form (generate)
```bash
Input:
阅读以下对话并回答问题。
男:今天怎么这么晚才来上班啊?女:昨天工作到很晚,而且我还感冒了。男:那你回去休息吧,我帮你请假。女:谢谢你。
问题:女的怎么样?
选项:正在工作,感冒了,在打电话,要出差。
答案:
Model output:
感冒了
```
#### Summarization (generate)
```bash
Input:
为下面的文章生成摘要:
北京时间9月5日12时52分,四川甘孜藏族自治州泸定县发生6.8级地震。地震发生后,领导高度重视并作出重要指示,要求把抢救生命作为首要任务,全力救援受灾群众,最大限度减少人员伤亡
答案:
Model output:
四川甘孜发生6.8级地震
```
#### Translation - Chinese to English (generate)
```bash
Input:
翻译成英文:
议长去了台湾,中国人民很愤怒。
答案:
Model output:
The secretary went to Taiwan and the Chinese people were angry.
```
#### Translation - English to Chinese (generate)
```bash
Input:
翻译成中文:
This is a dialogue robot that can talk to people.
答案:
Model output:
这是一台可以与人交谈的对话机器人。
```
#### General information extraction (generate)
```bash
Input:
信息抽取:
据新华社电广东省清远市清城区政府昨日对外发布信息称,日前被实名举报涉嫌勒索企业、说“分分钟可以搞垮一间厂”的清城区环保局局长陈柏,已被免去清城区区委委员
问题:机构名,人名,职位
答案:
Model output:
机构名:新华社,清城区政府,清城区环保局,清城区区委
人名:陈柏
职位:局长,区委委员
```
#### Resume information extraction (generate)
```bash
Input:
阅读文本抽取关键信息:
张玄武2000年出生中国国籍无境外居留权博士学历现任杭州线锁科技技术总监。
问题:机构,人名,职位,籍贯,专业,国籍,学历,种族
答案:
Model output:
人名:张玄武
职位:杭州线锁科技技术总监
国籍:中国国籍
学历:博士学历
```
#### Medical information extraction (generate)
```bash
Input:
从文本中抽取信息:
患者精神可,饮食可,睡眠可,二便正常。患者通过综合治疗头晕症状较前减轻,患者继续口服改善脑血管及调整血压变化药物。
问题:症状,治疗,检查,身体部位,疾病
答案:
Model output:
症状:头晕
治疗:改善脑血管及调整血压变化药物
身体部位:二便
```
#### E-commerce customer request analysis (classify)
```bash
Input:
电商客户诉求分类:
收到但不太合身,可以退换吗
选项:买家咨询商品是否支持花呗付款,买家表示收藏关注店铺,买家咨询退换货规则,买家需要商品推荐
答案:
Model output:
买家咨询退换货规则
```
#### Medical semantic similarity (classify)
```bash
Input:
下面句子是否表示了相同的语义:
文本1:糖尿病腿麻木怎么办?
文本2:糖尿病怎样控制生活方式
选项:相似,不相似
答案:
Model output:
不相似
```
#### Question generation (generate)
```bash
Input:
问题生成:
中新网2022年9月22日电 22日,商务部召开例行新闻发布会,商务部新闻发言人束珏婷表示,今年1-8月,中国实际使用外资1384亿美元,增长20.2%;其中,欧盟对华投资增长123.7%(含通过自由港投资数据)。这充分表明,包括欧盟在内的外国投资者持续看好中国市场,希望继续深化对华投资合作。
答案:
Model output:
1、商务部发言人束珏婷表示,中国去年1-8月实际使用外资1384亿美元,增长20.2%,其中欧盟对华投资增长多少?
2、欧盟对华投资的增长是否达到预期?
3、欧盟在对华投资合作方面取得什么样的成果?
```
#### Coreference resolution (generate)
```bash
Input:
指代消解:
段落:
少平跟润叶进了她二爸家的院子,润生走过来对他(代词)说:“我到宿舍找了你两回,你到哪里去了?”
问题:代词“他”指代的是?
答案:
Model output:
少平
```
#### Keyword extraction (generate)
```bash
Input:
抽取关键词:
当地时间21日,美国联邦储备委员会宣布加息75个基点,将联邦基金利率目标区间上调到3.00%至3.25%之间,符合市场预期。这是美联储今年以来第五次加息,也是连续第三次加息,创自1981年以来的最大密集加息幅度。
关键词:
Model output:
美联储,利率目标区间,加息,基点
```
#### Emotion classification (classify)
```bash
文字中包含了怎样的情感:
超可爱的帅哥,爱了。。。
选项:厌恶,喜欢,开心,悲伤,惊讶,生气,害怕
答案:
Model output:
喜欢
```
For more example capabilities and models, see
[ClueAI](https://github.com/clue-ai/PromptCLUE)
|
00bda2c14951481da303c403bf38bc96
|
tosin/dialogpt_afriwoz_pidgin
|
tosin
|
gpt2
| 9 | 3 |
transformers
| 0 |
conversational
| true | false | false |
cc-by-4.0
|
['en']
|
['AfriWOZ']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['conversational', 'transformers']
| false | true | true | 2,078 | false |
## DialoGPT_AfriWOZ (Pidgin)
This is a fine-tuned model of DialoGPT (small) on the AfriWOZ dataset. It is intended to be used as a conversational system in the Nigerian Pidgin English language.
The dataset it is trained on is limited in scope, as it covers only certain domains such as restaurant, hotel, taxi, and booking.
The perplexity achieved on the validation set is 38.52.
* Generation example from an interactive environment:
|Role | Response |
|------|------------|
|User | I hear say restaurant dey here. |
|Bot | I dey find restaurant. |
|User | Abeg you fit tell me which kind chop dey? |
|Bot | I no dey sure right now. |
|User | You do well. Thank you. |
|Bot | I dey find restauarant. |
Please find the information about preprocessing, training and full details of the DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
The paper for this work can be found on arXiv: [https://arxiv.org/pdf/2204.08083.pdf](https://arxiv.org/pdf/2204.08083.pdf)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
model = AutoModelForCausalLM.from_pretrained("tosin/dialogpt_afriwoz_pidgin")
# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in PyTorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("DialoGPT_pidgin_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
08dd80b8354aa699506aa83584337063
|
jojoUla/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR100-40
|
jojoUla
|
bert
| 15 | 0 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,305 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR100-40
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1928 | 1.0 | 1 | 5.0343 |
| 3.8865 | 2.0 | 2 | 4.7751 |
| 4.0526 | 3.0 | 3 | 2.2212 |
| 2.3444 | 4.0 | 4 | 1.6810 |
| 1.596 | 5.0 | 5 | 1.3135 |
| 1.6805 | 6.0 | 6 | 1.2568 |
| 1.1736 | 7.0 | 7 | 1.5288 |
| 1.2663 | 8.0 | 8 | 1.4556 |
| 1.3703 | 9.0 | 9 | 1.1139 |
| 0.9768 | 10.0 | 10 | 1.0658 |
| 1.0132 | 11.0 | 11 | 1.2556 |
| 0.9896 | 12.0 | 12 | 1.1046 |
| 1.1184 | 13.0 | 13 | 1.0522 |
| 0.8142 | 14.0 | 14 | 1.3122 |
| 0.706 | 15.0 | 15 | 1.0713 |
| 0.7227 | 16.0 | 16 | 1.4111 |
| 0.7169 | 17.0 | 17 | 0.5603 |
| 0.7922 | 18.0 | 18 | 1.0911 |
| 0.7763 | 19.0 | 19 | 0.6882 |
| 0.5832 | 20.0 | 20 | 1.4459 |
| 0.7265 | 21.0 | 21 | 1.5459 |
| 0.7249 | 22.0 | 22 | 0.9200 |
| 0.5397 | 23.0 | 23 | 1.0976 |
| 0.5063 | 24.0 | 24 | 1.1201 |
| 0.6569 | 25.0 | 25 | 1.0701 |
| 0.472 | 26.0 | 26 | 1.7735 |
| 0.6124 | 27.0 | 27 | 1.3597 |
| 0.6042 | 28.0 | 28 | 0.9292 |
| 0.5232 | 29.0 | 29 | 1.4994 |
| 0.4961 | 30.0 | 30 | 1.2059 |
| 0.371 | 31.0 | 31 | 1.2648 |
| 0.4746 | 32.0 | 32 | 1.0907 |
| 0.4901 | 33.0 | 33 | 1.2564 |
| 0.5066 | 34.0 | 34 | 1.9231 |
| 0.6352 | 35.0 | 35 | 1.0160 |
| 0.5672 | 36.0 | 36 | 1.2958 |
| 0.5139 | 37.0 | 37 | 0.9384 |
| 0.5583 | 38.0 | 38 | 1.9518 |
| 0.5443 | 39.0 | 39 | 1.4243 |
| 0.5935 | 40.0 | 40 | 1.3882 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
39feb1a665f89a9b0f4df8e7a14dfde4
|
aXhyra/irony_trained_1234567
|
aXhyra
|
distilbert
| 10 | 8 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,398 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# irony_trained_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6580
- F1: 0.6766
## Model description
More information needed
## Intended uses & limitations
More information needed
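A minimal usage sketch for irony detection with this checkpoint (the input tweet is illustrative):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="aXhyra/irony_trained_1234567")
print(clf("Great, another Monday morning meeting. Exactly what I needed."))
```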
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6608 | 1.0 | 716 | 0.6057 | 0.6704 |
| 0.5329 | 2.0 | 1432 | 0.8935 | 0.6621 |
| 0.3042 | 3.0 | 2148 | 1.3871 | 0.6822 |
| 0.1769 | 4.0 | 2864 | 1.6580 | 0.6766 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
7c4476fc686c6ea635070237d791563c
|
pannaga/wav2vec2-large-xls-r-300m-turkish-colab
|
pannaga
|
wav2vec2
| 15 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,408 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9701
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.3108 | 16.0 | 400 | 2.9378 | 1.0 |
| 3.0115 | 32.0 | 800 | 2.9701 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
1a112fe11841a926bc80051ac44c4180
|
AndrewR/distilgpt2-finetuned-katpoems-lm-15-epoch
|
AndrewR
|
gpt2
| 13 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,867 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-katpoems-lm-15-epoch
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8145
## Model description
More information needed
## Intended uses & limitations
More information needed
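A minimal generation sketch with this checkpoint (prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="AndrewR/distilgpt2-finetuned-katpoems-lm-15-epoch")
print(generator("The moon above the quiet field", max_length=40, do_sample=True, top_p=0.9)[0]["generated_text"])
```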
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 59 | 4.6495 |
| No log | 2.0 | 118 | 4.6555 |
| No log | 3.0 | 177 | 4.6696 |
| No log | 4.0 | 236 | 4.6930 |
| No log | 5.0 | 295 | 4.7132 |
| No log | 6.0 | 354 | 4.7185 |
| No log | 7.0 | 413 | 4.7444 |
| No log | 8.0 | 472 | 4.7611 |
| 4.2244 | 9.0 | 531 | 4.7794 |
| 4.2244 | 10.0 | 590 | 4.7841 |
| 4.2244 | 11.0 | 649 | 4.7929 |
| 4.2244 | 12.0 | 708 | 4.8048 |
| 4.2244 | 13.0 | 767 | 4.8058 |
| 4.2244 | 14.0 | 826 | 4.8124 |
| 4.2244 | 15.0 | 885 | 4.8145 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f4ae55bea24dc5eac4528e556cac6cbc
|
KoichiYasuoka/bert-base-thai-upos
|
KoichiYasuoka
|
bert
| 8 | 11 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['th']
|
['universal_dependencies']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['thai', 'token-classification', 'pos', 'wikipedia', 'dependency-parsing']
| false | true | true | 820 | false |
# bert-base-thai-upos
## Model Description
This is a BERT model pre-trained on Thai Wikipedia texts for POS-tagging and dependency-parsing, derived from [bert-base-th-cased](https://huggingface.co/Geotrend/bert-base-th-cased). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForTokenClassification
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
model=AutoModelForTokenClassification.from_pretrained("KoichiYasuoka/bert-base-thai-upos")
```
or
```py
import esupar
nlp=esupar.load("KoichiYasuoka/bert-base-thai-upos")
```
## See Also
[esupar](https://github.com/KoichiYasuoka/esupar): Tokenizer POS-tagger and Dependency-parser with BERT/RoBERTa/DeBERTa models
|
4f143454eed427416327139ad100d37f
|
qcs/ddpm-butterflies-128
|
qcs
| null | 18 | 3 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['huggan/smithsonian_butterflies_subset']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,223 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
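In the meantime, a minimal sketch of unconditional sampling, assuming the standard 🤗 Diffusers `DDPMPipeline` API:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("qcs/ddpm-butterflies-128")
image = pipeline().images[0]  # one unconditional 128x128 butterfly sample
image.save("butterfly.png")
```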
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/qcs/ddpm-butterflies-128/tensorboard?#scalars)
|
c816a4d99430c459f3957bde8de64feb
|
sd-concepts-library/uma-meme-style
|
sd-concepts-library
| null | 39 | 0 | null | 1 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,783 | false |
### uma-meme-style on Stable Diffusion
This is the `<uma-meme-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:


































|
542ab68dac82aa8f9ee2ba00bd686337
|
EP9/mt5-small-MT5-Intento1
|
EP9
|
mt5
| 12 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,416 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-MT5-Intento1
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 3.9645
- Rouge2: 0.8023
- Rougel: 3.8615
- Rougelsum: 3.8591
- Gen Len: 13.7379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 0.0 | 1.0 | 6034 | nan | 3.9645 | 0.8023 | 3.8615 | 3.8591 | 13.7379 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fcf939ab95e1d10e3b224365ead8844d
|
johko/capdec_001
|
johko
| null | 3 | 0 | null | 0 |
image-to-text
| false | false | false |
apache-2.0
|
['en']
|
['MS-COCO', 'Flickr30k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['Image Captioning']
| false | true | true | 1,348 | false |
# CapDec - NoiseLevel: 0.001
## Model Description
These are model weights originally provided by the authors of the paper [Text-Only Training for Image Captioning using Noise-Injected CLIP](https://arxiv.org/pdf/2211.00575.pdf).
Their method trains an image-captioning model using only text samples. To do so, they inject zero-mean Gaussian noise into the CLIP text embeddings before decoding.
In their words:
*Specifically, we assume that the visual embedding corresponding to a text embedding
lies somewhere within a ball of small radius around the text embedding (see Fig. 1).
We would like all text embeddings in this ball to decode to the same caption, which should
also correspond to the visual content mapped to this ball. We implement this intuition by
adding zero-mean Gaussian noise of STD to the text embedding before decoding it.*
The "Noise Level" of 0.001 is equivalent to the Noise Variance which is the square of the STD.
The reported metrics are results of a model with a Noise Variance of 0.016, which the authors unfortunately do not provide in their repository.
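For intuition, a minimal sketch of the noise-injection step described above (purely illustrative; in the actual method the noise is added to the CLIP text embeddings during training):
```python
import torch

noise_variance = 0.001          # the "noise level" of this checkpoint
std = noise_variance ** 0.5     # the variance is the square of the STD

text_embedding = torch.randn(1, 512)  # stand-in for a CLIP text embedding
noisy_embedding = text_embedding + torch.randn_like(text_embedding) * std
```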
## Datasets
The authors trained the model on MS-COCO and Flickr30k datasets.
## Performance
The authors don't explicitly report the performance for this NoiseLevel but it can be estimated from the following figure from the original paper:

|
6e94b4dad0119f359aac58c90ea534d5
|
IIIT-L/hing-roberta-finetuned-non-code-mixed-DS
|
IIIT-L
|
xlm-roberta
| 9 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,481 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hing-roberta-finetuned-non-code-mixed-DS
This model is a fine-tuned version of [l3cube-pune/hing-roberta](https://huggingface.co/l3cube-pune/hing-roberta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1286
- Accuracy: 0.6656
- Precision: 0.6575
- Recall: 0.6554
- F1: 0.6556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.824279936868144e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8233 | 2.0 | 926 | 0.8104 | 0.6656 | 0.6607 | 0.6537 | 0.6555 |
| 0.3924 | 3.99 | 1852 | 1.1286 | 0.6656 | 0.6575 | 0.6554 | 0.6556 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
3b2fdb82472afb0653fd72fd1aa4d793
|
nerijs/coralchar-diffusion
|
nerijs
| null | 24 | 13 |
diffusers
| 9 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 3 | 1 | 2 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 3,373 | false |
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<a href="https://www.patreon.com/user?u=29466374" target="_blank">
<img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/>
</a>
<a href="https://twitter.com/nerijs" target="_blank">
<img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/>
</a>
</div>
# coralchar-diffusion-v1
Stable Diffusion v1.5 model fine-tuned to generate cute character portraits
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670205150413-6303f37c3926de1f7ec42d3e.png" width="256">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670205171617-6303f37c3926de1f7ec42d3e.png" width="256">
</div>
## How to use
- Download the model and use it in your desired UI (tested on AUTOMATIC1111's); both .ckpt and Diffusers versions are available
- Trigger the style in your prompt with the **coralchar** token, look at the next section for more examples
- If you want to use the inpainting model, you can use it like a normal v1.5 model
## Versions
- **v1**: checkpoints from 1000 to 6000 steps are available to download
- **inpainting** version available
## Examples on step-6000 model
**a woman wearing blue jeans and a white tank top**
Steps: 20, Sampler: DPM++ SDE, CFG scale: 7, Size: 512x768
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670204360798-6303f37c3926de1f7ec42d3e.png" width="512"/>
**a man wearing a black puffy vest**
Steps: 20, Sampler: DPM++ SDE, CFG scale: 7, Size: 512x768
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670204467592-6303f37c3926de1f7ec42d3e.png" width="512"/>
## Examples on inpainting model
**a man wearing a blue puffy vest**
Steps: 20, Sampler: DPM++ SDE, CFG scale: 7, Size: 512x768, 0.75 Denoising strength
<h2>Original vs step_6000 vs inpainting version</h2>
<div style="display: flex; flex-direction: row; flex-wrap: wrap">
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670205036420-6303f37c3926de1f7ec42d3e.png" width="256"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670204708270-6303f37c3926de1f7ec42d3e.png" width="256"/>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1670204954426-6303f37c3926de1f7ec42d3e.png" width="256"/>
</div>
## Tips
- Best results with 512x768, outputs full body portraits
- Also high step count on Euler_a gives good results
- Low CFG scale outputs great results
- If you want to generate different expressions, generate a base character with txt2img, then adjust the outfit and details with the inpainting model, and use inpainting again to generate different expressions and poses
Please consider supporting further research on my Patreon:
<a href="https://www.patreon.com/user?u=29466374" target="_blank">
<img src="https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white" alt="Patreon"/>
</a>
If you have any questions, suggestions for new models, or need help in general with SD-related stuff, don't hesitate to reach out on Twitter:
<a href="https://twitter.com/nerijs" target="_blank">
<img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"/>
</a>
|
302e0143679dff0db9a93508e4b6599c
|
vasista22/whisper-telugu-large-v2
|
vasista22
|
whisper
| 12 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['te']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event']
| true | true | true | 1,399 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Telugu Large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Telugu data available from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
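A minimal inference sketch, assuming the standard `transformers` Whisper pipeline API (the audio file name and chunking settings are illustrative):
```python
from transformers import pipeline

transcribe = pipeline("automatic-speech-recognition",
                      model="vasista22/whisper-telugu-large-v2",
                      chunk_length_s=30)

# Force Telugu transcription instead of automatic language detection.
transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(
    language="te", task="transcribe")

print(transcribe("sample_telugu_audio.wav")["text"])
```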
## Training and evaluation data
Training Data: CSTD IIIT-H ASR Corpus, ULCA ASR Corpus, Shrutilipi ASR Corpus, Microsoft Research Telugu Corpus (Train+Dev), Babel ASR Corpus, Google/Fleurs (Train+Dev) set.
Evaluation Data: Babel Test, Microsoft Research Telugu Corpus Test, Google/Fleurs Test set, OpenSLR.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.75e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 22000
- training_steps: 75000
- mixed_precision_training: True
## Acknowledgement
This work was done at Speech Lab, IITM. The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
8de2c0a12538f8e01ccd8c1442c3cb84
|
anantoj/wav2vec2-xls-r-300m-zh-CN
|
anantoj
|
wav2vec2
| 22 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['zh-CN']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'common_voice', 'generated_from_trainer', 'hf-asr-leaderboard', 'robust-speech-event', 'sv']
| true | true | true | 10,778 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-CN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8122
- Wer: 0.8392
- Cer: 0.2059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 69.215 | 0.74 | 500 | 74.9751 | 1.0 | 1.0 |
| 8.2109 | 1.48 | 1000 | 7.0617 | 1.0 | 1.0 |
| 6.4277 | 2.22 | 1500 | 6.3811 | 1.0 | 1.0 |
| 6.3513 | 2.95 | 2000 | 6.3061 | 1.0 | 1.0 |
| 6.2522 | 3.69 | 2500 | 6.2147 | 1.0 | 1.0 |
| 5.9757 | 4.43 | 3000 | 5.7906 | 1.1004 | 0.9924 |
| 5.0642 | 5.17 | 3500 | 4.2984 | 1.7729 | 0.8214 |
| 4.6346 | 5.91 | 4000 | 3.7129 | 1.8946 | 0.7728 |
| 4.267 | 6.65 | 4500 | 3.2177 | 1.7526 | 0.6922 |
| 3.9964 | 7.39 | 5000 | 2.8337 | 1.8055 | 0.6546 |
| 3.8035 | 8.12 | 5500 | 2.5726 | 2.1851 | 0.6992 |
| 3.6273 | 8.86 | 6000 | 2.3391 | 2.1029 | 0.6511 |
| 3.5248 | 9.6 | 6500 | 2.1944 | 2.3617 | 0.6859 |
| 3.3683 | 10.34 | 7000 | 1.9827 | 2.1014 | 0.6063 |
| 3.2411 | 11.08 | 7500 | 1.8610 | 1.6160 | 0.5135 |
| 3.1299 | 11.82 | 8000 | 1.7446 | 1.5948 | 0.4946 |
| 3.0574 | 12.56 | 8500 | 1.6454 | 1.1291 | 0.4051 |
| 2.985 | 13.29 | 9000 | 1.5919 | 1.0673 | 0.3893 |
| 2.9573 | 14.03 | 9500 | 1.4903 | 1.0604 | 0.3766 |
| 2.8897 | 14.77 | 10000 | 1.4614 | 1.0059 | 0.3653 |
| 2.8169 | 15.51 | 10500 | 1.3997 | 1.0030 | 0.3550 |
| 2.8155 | 16.25 | 11000 | 1.3444 | 0.9980 | 0.3441 |
| 2.7595 | 16.99 | 11500 | 1.2911 | 0.9703 | 0.3325 |
| 2.7107 | 17.72 | 12000 | 1.2462 | 0.9565 | 0.3227 |
| 2.6358 | 18.46 | 12500 | 1.2466 | 0.9955 | 0.3333 |
| 2.5801 | 19.2 | 13000 | 1.2059 | 1.0010 | 0.3226 |
| 2.5554 | 19.94 | 13500 | 1.1919 | 1.0094 | 0.3223 |
| 2.5314 | 20.68 | 14000 | 1.1703 | 0.9847 | 0.3156 |
| 2.509 | 21.42 | 14500 | 1.1733 | 0.9896 | 0.3177 |
| 2.4391 | 22.16 | 15000 | 1.1811 | 0.9723 | 0.3164 |
| 2.4631 | 22.89 | 15500 | 1.1382 | 0.9698 | 0.3059 |
| 2.4414 | 23.63 | 16000 | 1.0893 | 0.9644 | 0.2972 |
| 2.3771 | 24.37 | 16500 | 1.0930 | 0.9505 | 0.2954 |
| 2.3658 | 25.11 | 17000 | 1.0756 | 0.9609 | 0.2926 |
| 2.3215 | 25.85 | 17500 | 1.0512 | 0.9614 | 0.2890 |
| 2.3327 | 26.59 | 18000 | 1.0627 | 1.1984 | 0.3282 |
| 2.3055 | 27.33 | 18500 | 1.0582 | 0.9520 | 0.2841 |
| 2.299 | 28.06 | 19000 | 1.0356 | 0.9480 | 0.2817 |
| 2.2673 | 28.8 | 19500 | 1.0305 | 0.9367 | 0.2771 |
| 2.2166 | 29.54 | 20000 | 1.0139 | 0.9223 | 0.2702 |
| 2.2378 | 30.28 | 20500 | 1.0095 | 0.9268 | 0.2722 |
| 2.2168 | 31.02 | 21000 | 1.0001 | 0.9085 | 0.2691 |
| 2.1766 | 31.76 | 21500 | 0.9884 | 0.9050 | 0.2640 |
| 2.1715 | 32.5 | 22000 | 0.9730 | 0.9505 | 0.2719 |
| 2.1104 | 33.23 | 22500 | 0.9752 | 0.9362 | 0.2656 |
| 2.1158 | 33.97 | 23000 | 0.9720 | 0.9263 | 0.2624 |
| 2.0718 | 34.71 | 23500 | 0.9573 | 1.0005 | 0.2759 |
| 2.0824 | 35.45 | 24000 | 0.9609 | 0.9525 | 0.2643 |
| 2.0591 | 36.19 | 24500 | 0.9662 | 0.9570 | 0.2667 |
| 2.0768 | 36.93 | 25000 | 0.9528 | 0.9574 | 0.2646 |
| 2.0893 | 37.67 | 25500 | 0.9810 | 0.9169 | 0.2612 |
| 2.0282 | 38.4 | 26000 | 0.9556 | 0.8877 | 0.2528 |
| 1.997 | 39.14 | 26500 | 0.9523 | 0.8723 | 0.2501 |
| 2.0209 | 39.88 | 27000 | 0.9542 | 0.8773 | 0.2503 |
| 1.987 | 40.62 | 27500 | 0.9427 | 0.8867 | 0.2500 |
| 1.9663 | 41.36 | 28000 | 0.9546 | 0.9065 | 0.2546 |
| 1.9945 | 42.1 | 28500 | 0.9431 | 0.9119 | 0.2536 |
| 1.9604 | 42.84 | 29000 | 0.9367 | 0.9030 | 0.2490 |
| 1.933 | 43.57 | 29500 | 0.9071 | 0.8916 | 0.2432 |
| 1.9227 | 44.31 | 30000 | 0.9048 | 0.8882 | 0.2428 |
| 1.8784 | 45.05 | 30500 | 0.9106 | 0.8991 | 0.2437 |
| 1.8844 | 45.79 | 31000 | 0.8996 | 0.8758 | 0.2379 |
| 1.8776 | 46.53 | 31500 | 0.9028 | 0.8798 | 0.2395 |
| 1.8372 | 47.27 | 32000 | 0.9047 | 0.8778 | 0.2379 |
| 1.832 | 48.01 | 32500 | 0.9016 | 0.8941 | 0.2393 |
| 1.8154 | 48.74 | 33000 | 0.8915 | 0.8916 | 0.2372 |
| 1.8072 | 49.48 | 33500 | 0.8781 | 0.8872 | 0.2365 |
| 1.7489 | 50.22 | 34000 | 0.8738 | 0.8956 | 0.2340 |
| 1.7928 | 50.96 | 34500 | 0.8684 | 0.8872 | 0.2323 |
| 1.7748 | 51.7 | 35000 | 0.8723 | 0.8718 | 0.2321 |
| 1.7355 | 52.44 | 35500 | 0.8760 | 0.8842 | 0.2331 |
| 1.7167 | 53.18 | 36000 | 0.8746 | 0.8817 | 0.2324 |
| 1.7479 | 53.91 | 36500 | 0.8762 | 0.8753 | 0.2281 |
| 1.7428 | 54.65 | 37000 | 0.8733 | 0.8699 | 0.2277 |
| 1.7058 | 55.39 | 37500 | 0.8816 | 0.8649 | 0.2263 |
| 1.7045 | 56.13 | 38000 | 0.8733 | 0.8689 | 0.2297 |
| 1.709 | 56.87 | 38500 | 0.8648 | 0.8654 | 0.2232 |
| 1.6799 | 57.61 | 39000 | 0.8717 | 0.8580 | 0.2244 |
| 1.664 | 58.35 | 39500 | 0.8653 | 0.8723 | 0.2259 |
| 1.6488 | 59.08 | 40000 | 0.8637 | 0.8803 | 0.2271 |
| 1.6298 | 59.82 | 40500 | 0.8553 | 0.8768 | 0.2253 |
| 1.6185 | 60.56 | 41000 | 0.8512 | 0.8718 | 0.2240 |
| 1.574 | 61.3 | 41500 | 0.8579 | 0.8773 | 0.2251 |
| 1.6192 | 62.04 | 42000 | 0.8499 | 0.8743 | 0.2242 |
| 1.6275 | 62.78 | 42500 | 0.8419 | 0.8758 | 0.2216 |
| 1.5697 | 63.52 | 43000 | 0.8446 | 0.8699 | 0.2222 |
| 1.5384 | 64.25 | 43500 | 0.8462 | 0.8580 | 0.2200 |
| 1.5115 | 64.99 | 44000 | 0.8467 | 0.8674 | 0.2214 |
| 1.5547 | 65.73 | 44500 | 0.8505 | 0.8669 | 0.2204 |
| 1.5597 | 66.47 | 45000 | 0.8421 | 0.8684 | 0.2192 |
| 1.505 | 67.21 | 45500 | 0.8485 | 0.8619 | 0.2187 |
| 1.5101 | 67.95 | 46000 | 0.8489 | 0.8649 | 0.2204 |
| 1.5199 | 68.69 | 46500 | 0.8407 | 0.8619 | 0.2180 |
| 1.5207 | 69.42 | 47000 | 0.8379 | 0.8496 | 0.2163 |
| 1.478 | 70.16 | 47500 | 0.8357 | 0.8595 | 0.2163 |
| 1.4817 | 70.9 | 48000 | 0.8346 | 0.8496 | 0.2151 |
| 1.4827 | 71.64 | 48500 | 0.8362 | 0.8624 | 0.2169 |
| 1.4513 | 72.38 | 49000 | 0.8355 | 0.8451 | 0.2137 |
| 1.4988 | 73.12 | 49500 | 0.8325 | 0.8624 | 0.2161 |
| 1.4267 | 73.85 | 50000 | 0.8396 | 0.8481 | 0.2157 |
| 1.4421 | 74.59 | 50500 | 0.8355 | 0.8491 | 0.2122 |
| 1.4311 | 75.33 | 51000 | 0.8358 | 0.8476 | 0.2118 |
| 1.4174 | 76.07 | 51500 | 0.8289 | 0.8451 | 0.2101 |
| 1.4349 | 76.81 | 52000 | 0.8372 | 0.8580 | 0.2140 |
| 1.3959 | 77.55 | 52500 | 0.8325 | 0.8436 | 0.2116 |
| 1.4087 | 78.29 | 53000 | 0.8351 | 0.8446 | 0.2105 |
| 1.415 | 79.03 | 53500 | 0.8363 | 0.8476 | 0.2123 |
| 1.4122 | 79.76 | 54000 | 0.8310 | 0.8481 | 0.2112 |
| 1.3969 | 80.5 | 54500 | 0.8239 | 0.8446 | 0.2095 |
| 1.361 | 81.24 | 55000 | 0.8282 | 0.8427 | 0.2091 |
| 1.3611 | 81.98 | 55500 | 0.8282 | 0.8407 | 0.2092 |
| 1.3677 | 82.72 | 56000 | 0.8235 | 0.8436 | 0.2084 |
| 1.3361 | 83.46 | 56500 | 0.8231 | 0.8377 | 0.2069 |
| 1.3779 | 84.19 | 57000 | 0.8206 | 0.8436 | 0.2070 |
| 1.3727 | 84.93 | 57500 | 0.8204 | 0.8392 | 0.2065 |
| 1.3317 | 85.67 | 58000 | 0.8207 | 0.8436 | 0.2065 |
| 1.3332 | 86.41 | 58500 | 0.8186 | 0.8357 | 0.2055 |
| 1.3299 | 87.15 | 59000 | 0.8193 | 0.8417 | 0.2075 |
| 1.3129 | 87.89 | 59500 | 0.8183 | 0.8431 | 0.2065 |
| 1.3352 | 88.63 | 60000 | 0.8151 | 0.8471 | 0.2062 |
| 1.3026 | 89.36 | 60500 | 0.8125 | 0.8486 | 0.2067 |
| 1.3468 | 90.1 | 61000 | 0.8124 | 0.8407 | 0.2058 |
| 1.3028 | 90.84 | 61500 | 0.8122 | 0.8461 | 0.2051 |
| 1.2884 | 91.58 | 62000 | 0.8086 | 0.8427 | 0.2048 |
| 1.3005 | 92.32 | 62500 | 0.8110 | 0.8387 | 0.2055 |
| 1.2996 | 93.06 | 63000 | 0.8126 | 0.8328 | 0.2057 |
| 1.2707 | 93.8 | 63500 | 0.8098 | 0.8402 | 0.2047 |
| 1.3026 | 94.53 | 64000 | 0.8097 | 0.8402 | 0.2050 |
| 1.2546 | 95.27 | 64500 | 0.8111 | 0.8402 | 0.2055 |
| 1.2426 | 96.01 | 65000 | 0.8088 | 0.8372 | 0.2059 |
| 1.2869 | 96.75 | 65500 | 0.8093 | 0.8397 | 0.2048 |
| 1.2782 | 97.49 | 66000 | 0.8099 | 0.8412 | 0.2049 |
| 1.2457 | 98.23 | 66500 | 0.8134 | 0.8412 | 0.2062 |
| 1.2967 | 98.97 | 67000 | 0.8115 | 0.8382 | 0.2055 |
| 1.2817 | 99.7 | 67500 | 0.8128 | 0.8392 | 0.2063 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
|
f2b91858b227fcadabc7e164da73bdd2
|
sd-concepts-library/child-zombie
|
sd-concepts-library
| null | 8 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 938 | false |
### child zombie on Stable Diffusion
This is the `<child-zombie>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:



|
281150220d7961d1b03c02ea6e5ee07c
|
TakeHirako/xlm-roberta-base-finetuned-panx-all
|
TakeHirako
|
xlm-roberta
| 10 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
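As a starting point, a minimal inference sketch based only on this checkpoint's `token-classification` pipeline tag (not part of the original card; the example sentence and aggregation strategy are placeholders):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="TakeHirako/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Jeff Dean works at Google in Mountain View."))
```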
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
7af61591ba158e161b4660657cf7c1c4
|
stevhliu/t5-small-finetuned-billsum-ca_test
|
stevhliu
|
t5
| 20 | 9 |
transformers
| 0 |
summarization
| true | false | false |
apache-2.0
| null |
['billsum']
| null | 2 | 1 | 1 | 0 | 0 | 0 | 0 |
['summarization', 't5']
| true | true | true | 1,730 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-billsum-ca_test
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Rouge1: 12.6315
- Rouge2: 6.9839
- Rougel: 10.9983
- Rougelsum: 11.9383
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
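As a starting point, a minimal usage sketch for this summarization checkpoint (not part of the original card; the bill text is a placeholder and the generation lengths are arbitrary):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="stevhliu/t5-small-finetuned-billsum-ca_test")
bill_text = "The people of the State of California do enact as follows: ..."  # placeholder text
print(summarizer(bill_text, max_length=60, min_length=10)[0]["summary_text"])
```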
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 495 | 2.4805 | 9.9389 | 4.1239 | 8.3979 | 9.1599 | 19.0 |
| 3.1564 | 2.0 | 990 | 2.3833 | 12.1026 | 6.5196 | 10.5123 | 11.4527 | 19.0 |
| 2.66 | 3.0 | 1485 | 2.3496 | 12.5389 | 6.8686 | 10.8798 | 11.8636 | 19.0 |
| 2.5671 | 4.0 | 1980 | 2.3376 | 12.6315 | 6.9839 | 10.9983 | 11.9383 | 19.0 |
### Framework versions
- Transformers 4.12.2
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
75dc7a251e309ca9b979dca5e95f587e
|
fathyshalab/all-roberta-large-v1-banking-2-16-5-oos
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,516 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-banking-2-16-5-oos
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2564
- Accuracy: 0.3009
## Model description
More information needed
## Intended uses & limitations
More information needed
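As a starting point, a minimal inference sketch based on the `text-classification` pipeline tag (not part of the original card; the input sentence is a placeholder, and the returned label names come from the checkpoint's own config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fathyshalab/all-roberta-large-v1-banking-2-16-5-oos")
print(classifier("I would like to report my credit card as stolen."))
```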
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.8065 | 1.0 | 1 | 2.5730 | 0.1681 |
| 2.2328 | 2.0 | 2 | 2.4625 | 0.2212 |
| 1.8783 | 3.0 | 3 | 2.3655 | 0.2478 |
| 1.64 | 4.0 | 4 | 2.2942 | 0.2655 |
| 1.4937 | 5.0 | 5 | 2.2564 | 0.3009 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
d8e989753b739942b92325fd8829a523
|
Lvxue/distilled-mt5-small-00001b
|
Lvxue
|
mt5
| 14 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en', 'ro']
|
['wmt16']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,036 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilled-mt5-small-00001b
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8994
- Bleu: 7.5838
- Gen Len: 45.058
## Model description
More information needed
## Intended uses & limitations
More information needed
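As a starting point, a minimal generation sketch (not part of the original card). The translation direction and the task prefix are assumptions, since the card only states that the model was fine-tuned on WMT16 ro-en:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Lvxue/distilled-mt5-small-00001b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumption: English -> Romanian with an explicit task prefix; adjust if the training setup differed.
inputs = tokenizer("translate English to Romanian: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```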
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
9209423422b35cb6366003098af726ce
|
allegro/herbert-base-cased
|
allegro
|
bert
| 10 | 51,355 |
transformers
| 8 |
feature-extraction
| true | true | true |
cc-by-4.0
|
['pl']
| null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['herbert']
| false | true | true | 3,215 | false |
# HerBERT
**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based Language Model trained on Polish corpora
using Masked Language Modelling (MLM) and Sentence Structural Objective (SSO) with dynamic masking of whole words. For more details, please refer to: [HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish](https://www.aclweb.org/anthology/2021.bsnlp-1.1/).
Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.9.
## Corpus
HerBERT was trained on six different corpora available for Polish language:
| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1)| 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
## Tokenizer
The training dataset was tokenized into subwords using a character level byte-pair encoding (``CharBPETokenizer``) with
a vocabulary size of 50k tokens. The tokenizer itself was trained with a [tokenizers](https://github.com/huggingface/tokenizers) library.
We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-base-cased")
model = AutoModel.from_pretrained("allegro/herbert-base-cased")
output = model(
**tokenizer.batch_encode_plus(
[
(
"A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.",
"A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy."
)
],
padding='longest',
add_special_tokens=True,
return_tensors='pt'
)
)
```
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{mroczkowski-etal-2021-herbert,
title = "{H}er{BERT}: Efficiently Pretrained Transformer-based Language Model for {P}olish",
author = "Mroczkowski, Robert and
Rybak, Piotr and
      Wr{\'o}blewska, Alina and
Gawlik, Ireneusz",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.1",
pages = "1--10",
}
```
## Authors
The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/).
You can contact us at: <a href="mailto:klejbenchmark@allegro.pl">klejbenchmark@allegro.pl</a>
|
30b173acb2f10e75b6378e6f33f39310
|
augustoortiz/bert-finetuned-squad2
|
augustoortiz
|
bert
| 8 | 9 |
transformers
| 0 |
question-answering
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 1,278 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# augustoortiz/bert-finetuned-squad2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2223
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
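As a starting point, a minimal question-answering sketch (not part of the original card). Since only TensorFlow weights are published, the pipeline is pinned to the TF backend; the question and context are placeholders:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="augustoortiz/bert-finetuned-squad2", framework="tf")
result = qa(
    question="What was the model fine-tuned from?",
    context="This model is a fine-tuned version of bert-base-cased on an unknown dataset.",
)
print(result["answer"], result["score"])
```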
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11091, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2223 | 0 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
f66201388a3603fe107eb6d94c9dc90a
|
HuyenNguyen/Vin-W-22000
|
HuyenNguyen
|
whisper
| 15 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['vi']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['hf-asr-leaderboard', 'generated_from_trainer']
| true | true | true | 1,012 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuyenNguyen
This model is a fine-tuned version of [Huyen2310/FPT-S15000](https://huggingface.co/Huyen2310/FPT-S15000) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
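As a starting point, a minimal transcription sketch based on the `automatic-speech-recognition` pipeline tag (not part of the original card; the audio path is a placeholder and must point to a file decodable by ffmpeg):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HuyenNguyen/Vin-W-22000")
print(asr("/path/to/vietnamese_sample.wav")["text"])  # placeholder path
```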
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
d03505ce09feb94cdf18aec66f151541
|
espnet/kan-bayashi_vctk_gst_xvector_conformer_fastspeech2
|
espnet
| null | 25 | 4 |
espnet
| 0 |
text-to-speech
| false | false | false |
cc-by-4.0
|
['en']
|
['vctk']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'text-to-speech']
| false | true | true | 1,818 | false |
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
fccf461de9bc08a64af18d23a3a2abde
|
shivkumarganesh/distilbert-base-uncased-finetuned-squad
|
shivkumarganesh
|
distilbert
| 10 | 6 |
transformers
| 0 |
question-answering
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,177 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2414
## Model description
More information needed
## Intended uses & limitations
More information needed
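As a starting point, a minimal extractive-QA sketch using the model and tokenizer directly (not part of the original card; the question and context are placeholders):
```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_id = "shivkumarganesh/distilbert-base-uncased-finetuned-squad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "Which dataset was used for fine-tuning?"
context = "The model was fine-tuned on the SQuAD dataset for one epoch."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions and decode the span
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax() + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```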
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3036 | 1.0 | 4427 | 1.2414 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
e7b9e07f58b35135b3e3eb3be9e90058
|
jonatasgrosman/exp_w2v2t_ru_no-pretraining_s895
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['ru']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'ru']
| false | true | true | 414 | false |
# exp_w2v2t_ru_no-pretraining_s895
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
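A minimal transcription sketch with the HuggingSound tool mentioned above (the audio paths are placeholders):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_ru_no-pretraining_s895")
audio_paths = ["/path/to/sample1.mp3", "/path/to/sample2.wav"]  # placeholder paths
transcriptions = model.transcribe(audio_paths)
print(transcriptions)
```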
|
4f913c13078599c5e842a80a8a2d54dd
|
fathyshalab/all-roberta-large-v1-credit_cards-5-16-5
|
fathyshalab
|
roberta
| 11 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,517 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# all-roberta-large-v1-credit_cards-5-16-5
This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3376
- Accuracy: 0.3186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.75 | 1.0 | 1 | 2.5769 | 0.2389 |
| 2.178 | 2.0 | 2 | 2.4879 | 0.2389 |
| 1.769 | 3.0 | 3 | 2.4180 | 0.2566 |
| 1.4703 | 4.0 | 4 | 2.3657 | 0.3097 |
| 1.2711 | 5.0 | 5 | 2.3376 | 0.3186 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
50b58dc6ace8de72babd440c4a5bb068
|
baffo32/gpt-j-6B-ptmap
|
baffo32
|
gptj
| 11 | 9 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
|
['en']
|
['The Pile']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'causal-lm']
| false | true | true | 9,968 | false |
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
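Continuing from the snippet above, a hedged generation sketch (the prompt and sampling parameters are arbitrary illustrations, not recommendations from the model authors):
```python
inputs = tokenizer("The benefits of open-source language models include", return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, temperature=0.9, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```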
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models, as well as this one, are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who have helped out one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
|
5142fd2c92b32ce76e3f1d846b364bf8
|
jonatasgrosman/exp_w2v2t_sv-se_vp-sv_s363
|
jonatasgrosman
|
wav2vec2
| 10 | 7 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['sv-SE']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'sv-SE']
| false | true | true | 475 | false |
# exp_w2v2t_sv-se_vp-sv_s363
Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (sv-SE)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
1562034d87ab411188645d23155cb193
|
DOOGLAK/Article_250v5_NER_Model_3Epochs_UNAUGMENTED
|
DOOGLAK
|
bert
| 13 | 6 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null |
['article250v5_wikigold_split']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,561 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Article_250v5_NER_Model_3Epochs_UNAUGMENTED
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article250v5_wikigold_split dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3250
- Precision: 0.3979
- Recall: 0.4221
- F1: 0.4097
- Accuracy: 0.8779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 31 | 0.5229 | 0.1336 | 0.0344 | 0.0547 | 0.8008 |
| No log | 2.0 | 62 | 0.3701 | 0.3628 | 0.3357 | 0.3487 | 0.8596 |
| No log | 3.0 | 93 | 0.3250 | 0.3979 | 0.4221 | 0.4097 | 0.8779 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.11.6
|
6d3fea1be8953d957e6ea4a20476f6ad
|
TransQuest/microtransquest-en_de-it-nmt
|
TransQuest
|
xlm-roberta
| 12 | 9 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
|
['en-de']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['Quality Estimation', 'microtransquest']
| false | true | true | 5,279 | false |
# TransQuest: Translation Quality Estimation with Cross-lingual Transformers
The goal of quality estimation (QE) is to evaluate the quality of a translation without having access to a reference translation. High-accuracy QE that can be easily deployed for a number of language pairs is the missing piece in many commercial translation workflows, as QE systems have numerous potential uses. They can be employed to select the best translation when several translation engines are available, or to inform the end user about the reliability of automatically translated content. In addition, QE systems can be used to decide whether a translation can be published as it is in a given context, or whether it requires human post-editing before publishing or translation from scratch by a human. Quality estimation can be done at different levels: document level, sentence level and word level.
With TransQuest, we have open-sourced our research in translation quality estimation, which also won the sentence-level direct assessment quality estimation shared task in [WMT 2020](http://www.statmt.org/wmt20/quality-estimation-task.html). TransQuest outperforms current open-source quality estimation frameworks such as [OpenKiwi](https://github.com/Unbabel/OpenKiwi) and [DeepQuest](https://github.com/sheffieldnlp/deepQuest).
## Features
- Sentence-level translation quality estimation on both aspects: predicting post editing efforts and direct assessment.
- Word-level translation quality estimation capable of predicting quality of source words, target words and target gaps.
- Outperforms current state-of-the-art quality estimation methods such as DeepQuest and OpenKiwi in all the languages experimented with.
- Pre-trained quality estimation models for fifteen language pairs are available in [HuggingFace.](https://huggingface.co/TransQuest)
## Installation
### From pip
```bash
pip install transquest
```
### From Source
```bash
git clone https://github.com/TharinduDR/TransQuest.git
cd TransQuest
pip install -r requirements.txt
```
## Using Pre-trained Models
```python
from transquest.algo.word_level.microtransquest.run_model import MicroTransQuestModel
import torch
model = MicroTransQuestModel("xlmroberta", "TransQuest/microtransquest-en_de-it-nmt", labels=["OK", "BAD"], use_cuda=torch.cuda.is_available())
source_tags, target_tags = model.predict([["if not , you may not be protected against the diseases . ", "ja tā nav , Jūs varat nepasargāt no slimībām . "]])
```
## Documentation
For more details follow the documentation.
1. **[Installation](https://tharindudr.github.io/TransQuest/install/)** - Install TransQuest locally using pip.
2. **Architectures** - Checkout the architectures implemented in TransQuest
1. [Sentence-level Architectures](https://tharindudr.github.io/TransQuest/architectures/sentence_level_architectures/) - We have released two architectures; MonoTransQuest and SiameseTransQuest to perform sentence level quality estimation.
2. [Word-level Architecture](https://tharindudr.github.io/TransQuest/architectures/word_level_architecture/) - We have released MicroTransQuest to perform word level quality estimation.
3. **Examples** - We have provided several examples on how to use TransQuest in recent WMT quality estimation shared tasks.
1. [Sentence-level Examples](https://tharindudr.github.io/TransQuest/examples/sentence_level_examples/)
2. [Word-level Examples](https://tharindudr.github.io/TransQuest/examples/word_level_examples/)
4. **Pre-trained Models** - We have provided pretrained quality estimation models for fifteen language pairs covering both sentence-level and word-level
1. [Sentence-level Models](https://tharindudr.github.io/TransQuest/models/sentence_level_pretrained/)
2. [Word-level Models](https://tharindudr.github.io/TransQuest/models/word_level_pretrained/)
5. **[Contact](https://tharindudr.github.io/TransQuest/contact/)** - Contact us for any issues with TransQuest
## Citations
If you are using the word-level architecture, please consider citing this paper which is accepted to [ACL 2021](https://2021.aclweb.org/).
```bash
@InProceedings{ranasinghe2021,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {An Exploratory Analysis of Multilingual Word Level Quality Estimation with Cross-Lingual Transformers},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
year = {2021}
}
```
If you are using the sentence-level architectures, please consider citing these papers which were presented in [COLING 2020](https://coling2020.org/) and in [WMT 2020](http://www.statmt.org/wmt20/) at EMNLP 2020.
```bash
@InProceedings{transquest:2020a,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest: Translation Quality Estimation with Cross-lingual Transformers},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
year = {2020}
}
```
```bash
@InProceedings{transquest:2020b,
author = {Ranasinghe, Tharindu and Orasan, Constantin and Mitkov, Ruslan},
title = {TransQuest at WMT2020: Sentence-Level Direct Assessment},
booktitle = {Proceedings of the Fifth Conference on Machine Translation},
year = {2020}
}
```
|
ffe26587e65c7e16bfd1e8d4ab807060
|
hamjang/distilbert-base-uncased-finetuned-emotion
|
hamjang
|
distilbert
| 12 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2182
- Accuracy: 0.9235
- F1: 0.9237
## Model description
More information needed
## Intended uses & limitations
More information needed
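As a starting point, a minimal classification sketch using the model directly (not part of the original card; the example sentence is a placeholder, and the predicted id is mapped through the checkpoint's own `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hamjang/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how great this day turned out!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])  # may print a generic LABEL_x if id2label was not customised
```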
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.809 | 1.0 | 250 | 0.3096 | 0.903 | 0.9009 |
| 0.2451 | 2.0 | 500 | 0.2182 | 0.9235 | 0.9237 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
607cf80c4c1f7f641c2905fbde07402a
|
facebook/maskformer-swin-large-coco
|
facebook
|
maskformer
| 5 | 107,642 |
transformers
| 4 |
image-segmentation
| true | false | false |
other
| null |
['coco']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['vision', 'image-segmentation']
| false | true | true | 2,524 | false |
# MaskFormer
MaskFormer model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Per-Pixel Classification is Not All You Need for Semantic Segmentation](https://arxiv.org/abs/2107.06278) and first released in [this repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169).
Disclaimer: The team releasing MaskFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MaskFormer addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation.

## Intended uses & limitations
You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=maskformer) to look for other
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation
from PIL import Image
import requests
# load MaskFormer fine-tuned on COCO panoptic segmentation
feature_extractor = MaskFormerFeatureExtractor.from_pretrained("facebook/maskformer-swin-large-coco")
model = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-large-coco")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts class_queries_logits of shape `(batch_size, num_queries)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits
# you can pass them to feature_extractor for postprocessing
result = feature_extractor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the MaskFormer docs)
predicted_panoptic_map = result["segmentation"]
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/maskformer).
|
48b3ed23a7b80694a10dc62414b03be4
|
microsoft/deberta-v3-base
|
microsoft
|
deberta-v2
| 8 | 276,895 |
transformers
| 65 |
fill-mask
| true | true | false |
mit
|
['en']
| null | null | 2 | 1 | 1 | 0 | 1 | 1 | 0 |
['deberta', 'deberta-v3', 'fill-mask']
| false | true | true | 3,325 | false |
## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB training data.
In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-Style pre-training with Gradient Disentangled Embedding Sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technique details about the new model from our [paper](https://arxiv.org/abs/2111.09543).
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.
The DeBERTa V3 base model comes with 12 layers and a hidden size of 768. It has only 86M backbone parameters with a vocabulary containing 128K tokens, which introduces 98M parameters in the embedding layer. This model was trained using the same 160GB data as DeBERTa V2.
#### Fine-tuning on NLU tasks
We present the dev results on SQuAD 2.0 and MNLI tasks.
| Model |Vocabulary(K)|Backbone #Params(M)| SQuAD 2.0(F1/EM) | MNLI-m/mm(ACC)|
|-------------------|----------|-------------------|-----------|----------|
| RoBERTa-base |50 |86 | 83.7/80.5 | 87.6/- |
| XLNet-base |32 |92 | -/80.2 | 86.8/- |
| ELECTRA-base |30 |86 | -/80.5 | 88.8/ |
| DeBERTa-base |50 |100 | 86.2/83.1| 88.8/88.5|
| DeBERTa-v3-base |128|86 | **88.4/85.4** | **90.6/90.7**|
| DeBERTa-v3-base + SiFT |128|86 | -/- | 91.0/-|
#### Fine-tuning with HF transformers
```bash
#!/bin/bash
cd transformers/examples/pytorch/text-classification/
pip install datasets
export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8
python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
run_glue.py \
--model_name_or_path microsoft/deberta-v3-base \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--evaluation_strategy steps \
--max_seq_length 256 \
--warmup_steps 500 \
--per_device_train_batch_size ${batch_size} \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir $output_dir \
--overwrite_output_dir \
--logging_steps 1000 \
--logging_dir $output_dir
```
### Citation
If you find DeBERTa useful for your work, please cite the following papers:
``` latex
@misc{he2021debertav3,
title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
year={2021},
eprint={2111.09543},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
c16ad7f038eb12fea5daeb30c38bd99d
|
Kamesh22/bart-base-News_Summarization_CNN
|
Kamesh22
|
bart
| 25 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,304 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-News_Summarization_CNN
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3750
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3979 | 0.99 | 114 | 1.2718 |
| 0.8315 | 1.99 | 228 | 0.3750 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
b63ce7f5573407af9a8f05107f1dee49
|
yuhuizhang/my_awesome_eli5_clm-model2
|
yuhuizhang
|
gpt2
| 6 | 0 |
transformers
| 0 |
text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,239 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8701 | 1.0 | 1055 | 3.7642 |
| 3.7747 | 2.0 | 2110 | 3.7501 |
| 3.7318 | 3.0 | 3165 | 3.7470 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
966d409de5a9074b00eb300d5aff0907
|
girinlp-i2i/generic_ner_model
|
girinlp-i2i
|
bert
| 16 | 0 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,518 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generic_ner_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0999
- Precision: 0.8727
- Recall: 0.8953
- F1: 0.8838
- Accuracy: 0.9740
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1083 | 1.0 | 1958 | 0.1007 | 0.8684 | 0.8836 | 0.8759 | 0.9723 |
| 0.0679 | 2.0 | 3916 | 0.0977 | 0.8672 | 0.8960 | 0.8813 | 0.9738 |
| 0.0475 | 3.0 | 5874 | 0.0999 | 0.8727 | 0.8953 | 0.8838 | 0.9740 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
1a84a58a5402cdd999b67c86c656bf5d
|
sv/gpt2-nft-poetry
|
sv
|
gpt2
| 12 | 4 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null |
[]
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| false | true | true | 1,316 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-nft-poetry
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
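As a starting point, a minimal generation sketch based on the `text-generation` pipeline tag (not part of the original card; the prompt and sampling settings are placeholders):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="sv/gpt2-nft-poetry")
out = generator("Beneath the pixel moon,", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```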
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 282 | 4.3092 |
| 4.5403 | 2.0 | 564 | 4.1283 |
| 4.5403 | 3.0 | 846 | 4.0605 |
| 4.039 | 4.0 | 1128 | 4.0321 |
| 4.039 | 5.0 | 1410 | 4.0243 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
300ed1ab696c355753379948bf1d2c4a
|
mmorr7son/bert-fine-turned-cola
|
mmorr7son
|
bert
| 12 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,388 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-turned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8073
- Matthews Correlation: 0.6107
## Model description
More information needed
## Intended uses & limitations
More information needed
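As a starting point, a minimal acceptability-classification sketch (not part of the original card; the sentence is a placeholder, and reading LABEL_0/LABEL_1 as unacceptable/acceptable follows the usual GLUE CoLA convention, which is an assumption since the card does not document `id2label`):
```python
from transformers import pipeline

cola = pipeline("text-classification", model="mmorr7son/bert-fine-turned-cola")
print(cola("The book was written by the author."))
```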
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4681 | 1.0 | 1069 | 0.5613 | 0.4892 |
| 0.321 | 2.0 | 2138 | 0.6681 | 0.5851 |
| 0.1781 | 3.0 | 3207 | 0.8073 | 0.6107 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
f4b19bbf0863f48d14584bb1c6b0038a
|
KoichiYasuoka/roberta-large-korean-hanja
|
KoichiYasuoka
|
roberta
| 7 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['ko']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['korean', 'masked-lm']
| false | true | true | 783 | false |
# roberta-large-korean-hanja
## Model Description
This is a RoBERTa model pre-trained on Korean texts, derived from [klue/roberta-large](https://huggingface.co/klue/roberta-large). Token-embeddings are enhanced to include all 한문 교육용 기초 한자 and 인명용 한자 characters. You can fine-tune `roberta-large-korean-hanja` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/roberta-large-korean-upos), [dependency-parsing](https://huggingface.co/KoichiYasuoka/roberta-large-korean-ud-goeswith), and so on.
## How to Use
```py
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-large-korean-hanja")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-large-korean-hanja")
```
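For a quick sanity check of the masked-LM head, the `fill-mask` pipeline can also be used (a minimal sketch; the example proverb 百聞不如一見 is arbitrary):
```py
from transformers import pipeline
fill = pipeline("fill-mask", model="KoichiYasuoka/roberta-large-korean-hanja")
mask = fill.tokenizer.mask_token  # avoids hard-coding the mask token
print(fill(f"百聞不如一{mask}"))  # asks the model to restore the final Hanja of the proverb
```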
|
36eaf7046cab4b87df470ae9fe7e398b
|
sd-concepts-library/pixel-mania
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 537 | false |
### pixel-mania on Stable Diffusion
This is the `<pixel-mania>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
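Recent versions of `diffusers` can also load the learned embedding directly; the sketch below is an illustration only, and the choice of base checkpoint (`CompVis/stable-diffusion-v1-4`) and the prompt are assumptions.
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/pixel-mania")  # registers the <pixel-mania> token
image = pipe("a city skyline in the style of <pixel-mania>").images[0]
image.save("pixel-mania-skyline.png")
```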
|
b236934244999b157b456094f4b204c8
|
edugp/data2vec-nlp-base
|
edugp
|
data2vec
| 8 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| true | true | true | 678 | false |
# Data2Vec NLP Base
This model was converted from `fairseq`.
The original weights can be found at https://dl.fbaipublicfiles.com/fairseq/data2vec/nlp_base.pt
Example usage:
```python
from transformers import RobertaTokenizer, Data2VecForSequenceClassification, Data2VecConfig
import torch
tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
config = Data2VecConfig.from_pretrained("edugp/data2vec-nlp-base")
model = Data2VecForSequenceClassification.from_pretrained("edugp/data2vec-nlp-base", config=config)
# Fine-tune this model on your own data before use; the snippet below only runs a single forward pass
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
prediction_logits = outputs.logits
```
|
75d9fbb615ce6ead407d510ad70aed55
|
tscholak/3vnuv1vf
|
tscholak
|
t5
| 21 | 1,827 |
transformers
| 6 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['spider']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text2sql']
| false | true | true | 2,769 | false |
## tscholak/3vnuv1vf
Fine-tuned weights for [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093) based on [t5.1.1.lm100k.large](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k).
### Training Data
The model has been fine-tuned on the 7000 training examples in the [Spider text-to-SQL dataset](https://yale-lily.github.io/spider). The model solves Spider's zero-shot text-to-SQL translation task, which means it can generalize to SQL databases unseen during training.
### Training Objective
This model was initialized with [t5.1.1.lm100k.large](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) and fine-tuned with the text-to-text generation objective.
Questions are always grounded in a database schema, and the model is trained to predict the SQL query that would be used to answer the question. The input to the model is composed of the user's natural language question, the database identifier, and a list of tables and their columns:
```
[question] | [db_id] | [table] : [column] ( [content] , [content] ) , [column] ( ... ) , [...] | [table] : ... | ...
```
The model outputs the database identifier and the SQL query that will be executed on the database to answer the user's question:
```
[db_id] | [sql]
```
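A minimal sketch of this serialization and of plain (unconstrained, i.e. non-PICARD) generation is shown below; the `concert_singer` schema and question are illustrative only, and the optional column contents are omitted.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("tscholak/3vnuv1vf")
model = AutoModelForSeq2SeqLM.from_pretrained("tscholak/3vnuv1vf")
question = "How many singers do we have?"
db_id = "concert_singer"
schema = "singer : singer_id , name , country , age | concert : concert_id , concert_name , year"
input_text = f"{question} | {db_id} | {schema}"  # [question] | [db_id] | [table] : [column] , ...
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Output follows the "[db_id] | [sql]" format described above, e.g. something like:
# concert_singer | select count(*) from singer
```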
### Performance
Out of the box, this model achieves 71.2 % exact-set match accuracy and 74.4 % execution accuracy on the Spider development set.
Using the PICARD constrained decoding method (see [the official PICARD implementation](https://github.com/ElementAI/picard)), the model's performance can be improved to **74.8 %** exact-set match accuracy and **79.2 %** execution accuracy on the Spider development set.
### Usage
Please see [the official repository](https://github.com/ElementAI/picard) for scripts and docker images that support evaluation and serving of this model.
### References
1. [PICARD - Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models](https://arxiv.org/abs/2109.05093)
2. [Official PICARD code](https://github.com/ElementAI/picard)
### Citation
```bibtex
@inproceedings{Scholak2021:PICARD,
author = {Torsten Scholak and Nathan Schucher and Dzmitry Bahdanau},
title = "{PICARD}: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.779",
pages = "9895--9901",
}
```
|
73d55583e903d7eeba84286c35f7c997
|