| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, UTC], 2020-02-15 11:33:14 to 2025-09-13 18:26:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 558 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC], 2022-03-02 23:29:04 to 2025-09-13 18:25:20) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
| allenai/aspire-contextualsentence-singlem-biomed | allenai | 2022-10-03T21:16:38Z | 166 | 0 | transformers | ["transformers", "pytorch", "bert", "feature-extraction", "en", "arxiv:2111.08366", "license:apache-2.0", "endpoints_compatible", "region:us"] | feature-extraction | 2022-04-23T14:18:20Z |
---
language: en
license: apache-2.0
---
## Overview
Model included in a paper for modeling fine-grained similarity between documents:
**Title**: "Multi-Vector Models with Textual Guidance for Fine-Grained Scientific Document Similarity"
**Authors**: Sheshera Mysore, Arman Cohan, Tom Hope
**Paper**: https://arxiv.org/abs/2111.08366
**Github**: https://github.com/allenai/aspire
**Note**: In the context of the paper, this model is referred to as `tsAspire` and represents the paper's proposed multi-vector model for fine-grained scientific document similarity.
## Model Card
### Model description
This model is a BERT-based multi-vector model trained for fine-grained similarity of biomedical scientific papers. It takes the title and abstract of a paper as input and represents the paper with contextual sentence vectors obtained by averaging the token representations of individual sentences; the whole title and abstract are encoded with cross-attention in the encoder block before the sentence embeddings are obtained. The model is trained with a novel form of textual supervision that uses co-citation contexts to align the sentences of positive examples. At test time, documents are ranked by the smallest L2 distance between the sentences of two documents, or by the smallest L2 distance between a set of query sentences and a candidate document.
### Training data
The model is trained on pairs of co-cited papers with their sentences aligned by the co-citation context in a contrastive learning setup. The model is trained on 1.2 million biomedical paper pairs. In training the model, negative examples for the contrastive loss are obtained as random in-batch negatives. Co-citations are obtained from the full text of papers. For example, the papers cited in parentheses below are all co-cited, and each pair of them would be used as a training pair, with the abstract sentences aligned using the co-citation context. Here the context notes why the cited papers are similar:
> The idea of distant supervision has been proposed and used widely in Relation Extraction (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012) , where the source of labels is an external knowledge base.
### Training procedure
The model was trained with the Adam optimizer and a learning rate of 2e-5, with 1000 warm-up steps followed by linear decay of the learning rate. Training convergence is checked against the loss on a held-out dev set consisting of co-cited paper pairs.
### Intended uses & limitations
This model is trained for fine-grained document similarity tasks in **biomedical** scientific text using multiple vectors per document. It supports fine-grained similarity by establishing sentence-to-sentence similarity between documents, and is best suited to an aspect-conditional task formulation where a query consists of a sentence in a query document and candidates must be retrieved along this specified sentence. Here, a document is the title and abstract of a paper. With appropriate fine-tuning the model can also be used for other tasks such as document- or sentence-level classification. Since the training data comes primarily from biomedicine, performance on other domains may be poorer.
### How to use
This model can be used via the `transformers` library and some additional code to compute contextual sentence vectors.
View example usage in the model's GitHub repo: https://github.com/allenai/aspire#tsaspire
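The repo contains the reference implementation. As a rough, unofficial sketch of the idea described above (sentence vectors obtained by averaging token states of the jointly encoded title and abstract), something like the following could work; the per-sentence token counting is a simplifying assumption, not the paper's exact span bookkeeping, and the input sentences are placeholders.

```python
# Unofficial sketch only; see https://github.com/allenai/aspire#tsaspire for the
# reference implementation. Sentence spans are recovered with a rough heuristic.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "allenai/aspire-contextualsentence-singlem-biomed"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID).eval()

# Title + abstract, pre-split into sentences (placeholder text).
sentences = [
    "Aspirin lowers the risk of cardiovascular events.",
    "We study its effect in a large biomedical cohort.",
]

# Encode the whole title+abstract jointly so sentences attend to each other.
enc = tokenizer(" ".join(sentences), return_tensors="pt", truncation=True)
with torch.no_grad():
    token_states = model(**enc).last_hidden_state[0]  # (num_tokens, hidden)

# Average token states within each (approximate) sentence span, skipping [CLS].
sent_vecs, offset = [], 1
for sent in sentences:
    n = len(tokenizer.tokenize(sent))
    sent_vecs.append(token_states[offset : offset + n].mean(dim=0))
    offset += n
sent_vecs = torch.stack(sent_vecs)  # (num_sentences, hidden) contextual sentence vectors
```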
### Variables and metrics
This model is evaluated on information retrieval datasets with document-level queries. Here we report performance on RELISH (biomedical/English) and TRECCOVID (biomedical/English). These are detailed on [github](https://github.com/allenai/aspire) and in our [paper](https://arxiv.org/abs/2111.08366). Both represent an abstract-level retrieval task: given a query scientific abstract, the task requires retrieving relevant candidate abstracts. In using this sentence-level model for abstract-level retrieval, we rank documents by the minimal L2 distance between the sentences of the query and candidate abstracts.
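To make the ranking rule concrete, here is a toy sketch under the assumption that `query_vecs` and `cand_vecs` are `(num_sentences, hidden)` tensors produced as in the snippet above; it is not the evaluation code behind the reported numbers.

```python
import torch

def aspire_distance(query_vecs: torch.Tensor, cand_vecs: torch.Tensor) -> float:
    """Smallest L2 distance between any query sentence and any candidate sentence."""
    return torch.cdist(query_vecs, cand_vecs, p=2).min().item()

# Candidates are ranked by ascending distance (smaller distance = more similar).
```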
### Evaluation results
The released model `aspire-contextualsentence-singlem-biomed` is compared against `allenai/specter`, a bi-encoder baseline, and `all-mpnet-base-v2`, a strong non-contextual Sentence-BERT baseline trained on ~1 billion training examples. `aspire-contextualsentence-singlem-biomed`<sup>*</sup> is the performance reported in our paper, averaged over 3 re-runs of the model. The released model `aspire-contextualsentence-singlem-biomed` is the single best run among the 3 re-runs.
| | TRECCOVID | TRECCOVID | RELISH | RELISH |
|-------------------------------------------:|:---------:|:-------:|:------:|:-------:|
| | MAP | NDCG%20 | MAP | NDCG%20 |
| `all-mpnet-base-v2` | 17.35 | 43.87 | 52.92 | 69.69 |
| `specter` | 28.24 | 59.28 | 60.62 | 77.20 |
| `aspire-contextualsentence-singlem-biomed`<sup>*</sup> | 26.24 | 56.55 | 61.29 | 77.89 |
| `aspire-contextualsentence-singlem-biomed` | 26.68 | 57.21 | 61.06 | 77.70 |
**Alternative models:**
Besides the above models, consider these alternative models also released in the Aspire paper:
[`aspire-contextualsentence-singlem-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-singlem-compsci): if you want to run on computer science papers and want a model trained to match a _single_ sentence between documents.
[`aspire-contextualsentence-multim-biomed`](https://huggingface.co/allenai/aspire-contextualsentence-multim-biomed): if you want to run on biomedical papers and want a model trained to match _multiple_ sentences between documents.
[`aspire-contextualsentence-multim-compsci`](https://huggingface.co/allenai/aspire-contextualsentence-multim-compsci): if you want to run on computer science papers and want a model trained to match _multiple_ sentences between documents.
| allenai/transformer_qa | allenai | 2022-10-03T21:12:21Z | 12 | 3 | allennlp | ["allennlp", "tensorboard", "question-answering", "en", "region:us"] | question-answering | 2022-03-02T23:29:05Z |
---
language: en
tags:
- allennlp
- question-answering
---
A reading comprehension model patterned after the model proposed in *BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding* (Devlin et al., 2018), with improvements borrowed from the SQuAD model in the `transformers` project. It predicts start and end tokens with a linear layer on top of word-piece embeddings.
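A minimal usage sketch, assuming the `allennlp` and `allennlp-models` packages and a version that supports loading Hub-hosted archives through the `hf://` scheme; the passage/question strings and the `best_span_str` output key are illustrative, not taken from this card.

```python
from allennlp.predictors.predictor import Predictor

# Load the reading-comprehension predictor directly from the Hugging Face Hub
# (assumes hf:// support in your AllenNLP version).
predictor = Predictor.from_path("hf://allenai/transformer_qa")
result = predictor.predict(
    passage="AllenNLP is an open-source NLP research library built on PyTorch.",
    question="What is AllenNLP built on?",
)
print(result["best_span_str"])  # expected answer span, e.g. "PyTorch"
```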
| misterkilgore/distilbert-base-uncased-finetuned-disaster-tweet | misterkilgore | 2022-10-03T20:09:45Z | 104 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-30T12:51:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-disaster-tweet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-disaster-tweet
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4052
- Accuracy: 0.8207
- F1: 0.8203
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
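As a rough guide, these settings correspond approximately to the following `TrainingArguments`; the dataset preparation is omitted, and `train_ds`, `eval_ds`, and `compute_metrics` are placeholders rather than anything specified in this card.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-disaster-tweet",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds, compute_metrics=compute_metrics)
# trainer.train()
```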
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5056 | 1.0 | 96 | 0.4139 | 0.8188 | 0.8179 |
| 0.3991 | 2.0 | 192 | 0.4052 | 0.8207 | 0.8203 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| Arklyn/fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1 | Arklyn | 2022-10-03T20:00:31Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_10_0", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-10-03T07:36:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2114
- Wer: 0.1762
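Although the card does not include a usage section, a fine-tuned Wav2Vec2 checkpoint like this one can typically be used through the ASR pipeline; a minimal sketch, where `sample.wav` is a placeholder 16 kHz Indonesian audio file:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Arklyn/fine-tune-Wav2Vec2-XLS-R-300M-Indonesia-1",
)
print(asr("sample.wav")["text"])  # transcription of the audio file
```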
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 36
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9372 | 4.0 | 460 | 2.8359 | 1.0 |
| 2.2755 | 8.0 | 920 | 0.3708 | 0.3983 |
| 0.6614 | 12.0 | 1380 | 0.2433 | 0.2636 |
| 0.5071 | 16.0 | 1840 | 0.2347 | 0.2395 |
| 0.4516 | 20.0 | 2300 | 0.2213 | 0.2185 |
| 0.4206 | 24.0 | 2760 | 0.2222 | 0.2008 |
| 0.3844 | 28.0 | 3220 | 0.2072 | 0.1887 |
| 0.3678 | 32.0 | 3680 | 0.2071 | 0.1886 |
| 0.3565 | 36.0 | 4140 | 0.2015 | 0.1851 |
| 0.3388 | 40.0 | 4600 | 0.2137 | 0.1850 |
| 0.3235 | 44.0 | 5060 | 0.2072 | 0.1791 |
| 0.3173 | 48.0 | 5520 | 0.2095 | 0.1777 |
| 0.3088 | 52.0 | 5980 | 0.2102 | 0.1784 |
| 0.3 | 56.0 | 6440 | 0.2164 | 0.1772 |
| 0.2957 | 60.0 | 6900 | 0.2114 | 0.1762 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| huggingtweets/morgen__shtern | huggingtweets | 2022-10-03T19:49:34Z | 118 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-10-03T19:47:38Z |
---
language: en
thumbnail: http://www.huggingtweets.com/morgen__shtern/1664826569898/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1567266375026053125/0cyfXyiF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MORGENSHTERN</div>
<div style="text-align: center; font-size: 14px;">@morgen__shtern</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MORGENSHTERN.
| Data | MORGENSHTERN |
| --- | --- |
| Tweets downloaded | 3178 |
| Retweets | 57 |
| Short tweets | 1034 |
| Tweets kept | 2087 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n5yin9a/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @morgen__shtern's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2w93y3gk) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2w93y3gk/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/morgen__shtern')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| kkotkar1/finetuning-sentiment-model-3000-samples-kunal | kkotkar1 | 2022-10-03T19:33:19Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-09-30T21:23:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples-kunal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples-kunal
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| huggingtweets/elonmusk-medvedevrussia | huggingtweets | 2022-10-03T19:16:02Z | 120 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-10-03T19:12:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-medvedevrussia/1664824557368/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572573363255525377/Xz3fufYY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/2348558617/x0vh6bui3sq97vt4jd2n_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Дмитрий Медведев</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-medvedevrussia</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Дмитрий Медведев.
| Data | Elon Musk | Дмитрий Медведев |
| --- | --- | --- |
| Tweets downloaded | 3200 | 1745 |
| Retweets | 122 | 298 |
| Short tweets | 976 | 50 |
| Tweets kept | 2102 | 1397 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bebrah3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-medvedevrussia's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/38bpbcfm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/38bpbcfm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-medvedevrussia')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| neelrr/xlm-roberta-base-finetuned-panx-ta | neelrr | 2022-10-03T18:57:36Z | 126 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-10-03T18:52:53Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-ta
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.ta
metrics:
- name: F1
type: f1
value: 0.8144578313253013
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-ta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2183
- F1: 0.8145
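The card omits a usage example; a minimal sketch with the token-classification pipeline (the Tamil sentence below is only a placeholder):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="neelrr/xlm-roberta-base-finetuned-panx-ta",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("ராஜீவ் காந்தி சென்னையில் பிறந்தார்."))
```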
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5477 | 1.0 | 209 | 0.2732 | 0.7305 |
| 0.2506 | 2.0 | 418 | 0.2425 | 0.7626 |
| 0.168 | 3.0 | 627 | 0.2183 | 0.8145 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| neelrr/xlm-roberta-base-finetuned-panx-hi-mr | neelrr | 2022-10-03T18:44:54Z | 105 | 0 | transformers | ["transformers", "pytorch", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-10-03T18:37:10Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-hi-mr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-hi-mr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1942
- F1: 0.8710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4628 | 1.0 | 417 | 0.2603 | 0.8062 |
| 0.2064 | 2.0 | 834 | 0.1951 | 0.8492 |
| 0.1289 | 3.0 | 1251 | 0.1942 | 0.8710 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| ad7/finetuning-sentiment-model-3000-samples | ad7 | 2022-10-03T18:18:09Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-10-03T18:08:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.77
- name: F1
type: f1
value: 0.7561837455830389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6587
- Accuracy: 0.77
- F1: 0.7562
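A minimal usage sketch, not part of the original card; the review text is a placeholder, and label names follow whatever mapping is stored in the model config (often `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="ad7/finetuning-sentiment-model-3000-samples",
)
print(clf("This movie was a complete waste of time."))
```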
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| neelrr/xlm-roberta-base-finetuned-panx-hi | neelrr | 2022-10-03T18:15:43Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-10-03T15:33:45Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-hi
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.hi
metrics:
- name: F1
type: f1
value: 0.8613782051282051
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-hi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- F1: 0.8614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.62 | 1.0 | 209 | 0.3914 | 0.7622 |
| 0.2603 | 2.0 | 418 | 0.2665 | 0.8211 |
| 0.1653 | 3.0 | 627 | 0.2211 | 0.8614 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
| lilykaw/finetuning-sentiment-model-3000-samples | lilykaw | 2022-10-03T18:14:03Z | 105 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-10-03T18:04:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.6633333333333333
- name: F1
type: f1
value: 0.7247956403269755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6551
- Accuracy: 0.6633
- F1: 0.7248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| arminmehrabian/distilgpt2-finetuned-wikitext2-agu | arminmehrabian | 2022-10-03T18:04:48Z | 324 | 1 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-07-25T02:53:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2-agu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2-agu
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1869
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 3.7357 | 1.0 | 13655 | 3.6781 |
| 3.5721 | 2.0 | 27310 | 3.5302 |
| 3.4961 | 3.0 | 40965 | 3.4658 |
| 3.4406 | 4.0 | 54620 | 3.4242 |
| 3.4043 | 5.0 | 68275 | 3.3943 |
| 3.3789 | 6.0 | 81930 | 3.3726 |
| 3.3576 | 7.0 | 95585 | 3.3538 |
| 3.3389 | 8.0 | 109240 | 3.3389 |
| 3.3151 | 9.0 | 122895 | 3.3270 |
| 3.314 | 5.0 | 136545 | 3.3226 |
| 3.3044 | 6.0 | 163854 | 3.3124 |
| 3.2931 | 7.0 | 191163 | 3.3078 |
| 3.2874 | 8.0 | 218472 | 3.3094 |
| 3.2817 | 9.0 | 245781 | 3.2943 |
| 3.269 | 10.0 | 273090 | 3.2785 |
| 3.2423 | 11.0 | 300399 | 3.2651 |
| 3.2253 | 12.0 | 327708 | 3.2530 |
| 3.2096 | 13.0 | 355017 | 3.2435 |
| 3.1939 | 14.0 | 382326 | 3.2326 |
| 3.1786 | 15.0 | 409635 | 3.2225 |
| 3.1625 | 16.0 | 436944 | 3.2198 |
| 3.1619 | 17.0 | 464253 | 3.2180 |
| 3.1521 | 18.0 | 491562 | 3.2164 |
| 3.1555 | 19.0 | 518871 | 3.2152 |
| 3.1523 | 20.0 | 546180 | 3.2164 |
| 3.1639 | 21.0 | 573489 | 3.2133 |
| 3.1483 | 22.0 | 600798 | 3.2113 |
| 3.1497 | 23.0 | 628107 | 3.2077 |
| 3.1468 | 24.0 | 655416 | 3.2066 |
| 3.1461 | 25.0 | 682725 | 3.2052 |
| 3.1391 | 26.0 | 710034 | 3.2039 |
| 3.1384 | 27.0 | 737343 | 3.2031 |
| 3.135 | 28.0 | 764652 | 3.2020 |
| 3.1262 | 29.0 | 791961 | 3.2015 |
| 3.1357 | 30.0 | 819270 | 3.2019 |
| 3.1372 | 31.0 | 846579 | 3.2003 |
| 3.1346 | 32.0 | 873888 | 3.1988 |
| 3.134 | 33.0 | 901197 | 3.1975 |
| 3.1256 | 34.0 | 928506 | 3.1965 |
| 3.1261 | 35.0 | 955815 | 3.1950 |
| 3.1255 | 36.0 | 983124 | 3.1945 |
| 3.1278 | 37.0 | 1010433 | 3.1940 |
| 3.1186 | 38.0 | 1037742 | 3.1934 |
| 3.1136 | 39.0 | 1065051 | 3.1932 |
| 3.12 | 40.0 | 1092360 | 3.1931 |
| 3.12 | 41.0 | 1119669 | 3.1930 |
| 3.1165 | 42.0 | 1146978 | 3.1914 |
| 3.1166 | 43.0 | 1174287 | 3.1900 |
| 3.1139 | 44.0 | 1201596 | 3.1892 |
| 3.1135 | 45.0 | 1228905 | 3.1885 |
| 3.1077 | 46.0 | 1256214 | 3.1881 |
| 3.1097 | 47.0 | 1283523 | 3.1873 |
| 3.1076 | 48.0 | 1310832 | 3.1872 |
| 3.102 | 49.0 | 1338141 | 3.1870 |
| 3.1086 | 50.0 | 1365450 | 3.1869 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
| model-attribution-challenge/codegen-350M-multi | model-attribution-challenge | 2022-10-03T16:18:49Z | 108 | 2 | transformers | ["transformers", "pytorch", "codegen", "text-generation", "arxiv:2203.13474", "license:bsd-3-clause", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-07-26T13:36:04Z |
---
license: bsd-3-clause
---
# CodeGen (CodeGen-Multi 350M)
## Model description
CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models are originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`).
The checkpoint included in this repository is denoted as **CodeGen-Multi 350M** in the paper, where "Multi" means the model is initialized with *CodeGen-NL 350M* and further pre-trained on a dataset of multiple programming languages, and "350M" refers to the number of trainable parameters.
## Training data
This checkpoint (CodeGen-Multi 350M) was firstly initialized with *CodeGen-NL 350M*, and then pre-trained on [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos), a large-scale dataset of multiple programming languages from GitHub repositories. The data consists of 119.2B tokens and includes C, C++, Go, Java, JavaScript, and Python.
## Training procedure
CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs.
The family of models is trained using multiple TPU-v4-512 instances by Google, leveraging data and model parallelism.
See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Evaluation results
We evaluate our models on two code generation benchmarks: HumanEval and MTPB. Please refer to the [paper](https://arxiv.org/abs/2203.13474) for more details.
## Intended Use and Limitations
As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts and calculating their likelihood.
However, the model is intended for and best at **program synthesis**, that is, generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
## How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-multi")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-multi")
text = "def hello_world():"
input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
## BibTeX entry and citation info
```bibtex
@article{Nijkamp2022ACP,
title={A Conversational Paradigm for Program Synthesis},
author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming},
journal={arXiv preprint},
year={2022}
}
```
| sd-concepts-library/jang-sung-rak-style | sd-concepts-library | 2022-10-03T15:43:16Z | 0 | 1 | null | ["license:mit", "region:us"] | null | 2022-10-03T15:43:05Z |
---
license: mit
---
### Jang-Sung-Rak-Style on Stable Diffusion
This is the `<Jang-Sung-Rak-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
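A minimal sketch of local usage with `diffusers`, assuming a recent release that supports `load_textual_inversion` with a Hub repo id; the base checkpoint and the use of `<Jang-Sung-Rak-style>` as the prompt token are assumptions based on this card, not instructions from it.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base checkpoint; the concept was trained for Stable Diffusion v1-style models.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/jang-sung-rak-style")

image = pipe("a city street in the style of <Jang-Sung-Rak-style>").images[0]
image.save("jang_sung_rak_style.png")
```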
Here is the new concept you will be able to use as a `style`:







| qq895398111/ddjcd | qq895398111 | 2022-10-03T14:42:17Z | 0 | 0 | null | ["region:us"] | null | 2022-10-03T14:41:19Z |
The Victorian Age
Big chest woman
| damilojohn/Bert2BertForTextDescrambling | damilojohn | 2022-10-03T12:40:40Z | 111 | 1 | transformers | ["transformers", "pytorch", "encoder-decoder", "text2text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-10-03T00:44:12Z |
---
license: apache-2.0
---
This model receives scrambled or incoherent sentences as input and returns a meaningful sentence using the same words as the input, a form of grammar correction, if you will.
It was trained on a dataset of permuted sentences derived from Wikipedia pages as inputs, with the correct arrangement of words as labels.
It is an encoder-decoder model that uses BERT's weights in both its encoder and its decoder.
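A minimal usage sketch, not included in the original card; it assumes the repo ships a tokenizer alongside the model, and the scrambled input string is a placeholder.

```python
from transformers import pipeline

descrambler = pipeline(
    "text2text-generation",
    model="damilojohn/Bert2BertForTextDescrambling",
)
# Feed a scrambled sentence; the model should return a reordered, coherent one.
print(descrambler("ball the chased dog the")[0]["generated_text"])
```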
| cw1521/opus-mt-st-en | cw1521 | 2022-10-03T12:36:01Z | 115 | 0 | transformers | ["transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-09-29T01:37:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: opus-mt-st-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-st-en
This model is a fine-tuned version of [cw1521/opus-mt-st-en](https://huggingface.co/cw1521/opus-mt-st-en) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| cw1521/opus-mt-en-st | cw1521 | 2022-10-03T12:31:35Z | 112 | 0 | transformers | ["transformers", "pytorch", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-09-29T02:29:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: opus-mt-en-st
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-st
This model is a fine-tuned version of [cw1521/opus-mt-en-st](https://huggingface.co/cw1521/opus-mt-en-st) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3710
- Bleu: 77.1725
- Gen Len: 60.3696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.3768 | 1.0 | 969 | 0.3710 | 77.1725 | 60.3696 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
| sd-concepts-library/aadhav-face | sd-concepts-library | 2022-10-03T12:22:50Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2022-10-03T12:22:47Z |
---
license: mit
---
### aadhav face on Stable Diffusion
This is the `<aadhav-face>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




| Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit | Muennighoff | 2022-10-03T12:16:09Z | 828 | 6 | sentence-transformers | ["sentence-transformers", "pytorch", "gptj", "feature-extraction", "sentence-similarity", "mteb", "arxiv:2202.08904", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-03-02T23:29:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-5.8B-weightedmean-nli-bitfit
results:
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en)
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 74.07462686567165
- type: ap
value: 37.44692407529112
- type: f1
value: 68.28971003916419
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (de)
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 66.63811563169165
- type: ap
value: 78.57252079915924
- type: f1
value: 64.5543087846584
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (en-ext)
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 77.21889055472263
- type: ap
value: 25.663426367826712
- type: f1
value: 64.26265688503176
- task:
type: Classification
dataset:
type: mteb/amazon_counterfactual
name: MTEB AmazonCounterfactualClassification (ja)
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.06209850107067
- type: ap
value: 14.028219107023915
- type: f1
value: 48.10387189660778
- task:
type: Classification
dataset:
type: mteb/amazon_polarity
name: MTEB AmazonPolarityClassification
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 82.30920000000002
- type: ap
value: 76.88786578621213
- type: f1
value: 82.15455656065011
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (en)
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 41.584
- type: f1
value: 41.203137944390114
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (de)
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.288000000000004
- type: f1
value: 34.672995558518096
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (es)
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 38.34
- type: f1
value: 37.608755629529455
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (fr)
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 37.839999999999996
- type: f1
value: 36.86898201563507
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (ja)
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 30.936000000000003
- type: f1
value: 30.49401738527071
- task:
type: Classification
dataset:
type: mteb/amazon_reviews_multi
name: MTEB AmazonReviewsClassification (zh)
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 33.75
- type: f1
value: 33.38338946025617
- task:
type: Retrieval
dataset:
type: arguana
name: MTEB ArguAna
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 13.727
- type: map_at_10
value: 26.740000000000002
- type: map_at_100
value: 28.218
- type: map_at_1000
value: 28.246
- type: map_at_3
value: 21.728
- type: map_at_5
value: 24.371000000000002
- type: ndcg_at_1
value: 13.727
- type: ndcg_at_10
value: 35.07
- type: ndcg_at_100
value: 41.947
- type: ndcg_at_1000
value: 42.649
- type: ndcg_at_3
value: 24.484
- type: ndcg_at_5
value: 29.282999999999998
- type: precision_at_1
value: 13.727
- type: precision_at_10
value: 6.223
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.835
- type: precision_at_5
value: 8.848
- type: recall_at_1
value: 13.727
- type: recall_at_10
value: 62.233000000000004
- type: recall_at_100
value: 93.67
- type: recall_at_1000
value: 99.14699999999999
- type: recall_at_3
value: 32.504
- type: recall_at_5
value: 44.239
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-p2p
name: MTEB ArxivClusteringP2P
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 40.553923271901695
- task:
type: Clustering
dataset:
type: mteb/arxiv-clustering-s2s
name: MTEB ArxivClusteringS2S
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 32.49323183712211
- task:
type: Reranking
dataset:
type: mteb/askubuntudupquestions-reranking
name: MTEB AskUbuntuDupQuestions
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.89811361443445
- type: mrr
value: 70.16235764850724
- task:
type: STS
dataset:
type: mteb/biosses-sts
name: MTEB BIOSSES
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 82.50506557805856
- type: cos_sim_spearman
value: 79.50000423261176
- type: euclidean_pearson
value: 75.76190885392926
- type: euclidean_spearman
value: 76.7330737163434
- type: manhattan_pearson
value: 75.825318036112
- type: manhattan_spearman
value: 76.7415076434559
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (de-en)
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 75.49060542797494
- type: f1
value: 75.15379262352123
- type: precision
value: 74.99391092553932
- type: recall
value: 75.49060542797494
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (fr-en)
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.4182258419546555
- type: f1
value: 0.4182258419546555
- type: precision
value: 0.4182258419546555
- type: recall
value: 0.4182258419546555
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (ru-en)
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.013855213023900243
- type: f1
value: 0.0115460108532502
- type: precision
value: 0.010391409767925183
- type: recall
value: 0.013855213023900243
- task:
type: BitextMining
dataset:
type: mteb/bucc-bitext-mining
name: MTEB BUCC (zh-en)
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.315955766192733
- type: f1
value: 0.315955766192733
- type: precision
value: 0.315955766192733
- type: recall
value: 0.315955766192733
- task:
type: Classification
dataset:
type: mteb/banking77
name: MTEB Banking77Classification
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 81.74025974025973
- type: f1
value: 81.66568824876
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-p2p
name: MTEB BiorxivClusteringP2P
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.59451202614059
- task:
type: Clustering
dataset:
type: mteb/biorxiv-clustering-s2s
name: MTEB BiorxivClusteringS2S
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 29.128241446157165
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackAndroidRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.715
- type: map_at_10
value: 35.007
- type: map_at_100
value: 36.352000000000004
- type: map_at_1000
value: 36.51
- type: map_at_3
value: 32.257999999999996
- type: map_at_5
value: 33.595000000000006
- type: ndcg_at_1
value: 33.906
- type: ndcg_at_10
value: 40.353
- type: ndcg_at_100
value: 45.562999999999995
- type: ndcg_at_1000
value: 48.454
- type: ndcg_at_3
value: 36.349
- type: ndcg_at_5
value: 37.856
- type: precision_at_1
value: 33.906
- type: precision_at_10
value: 7.854
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 17.549
- type: precision_at_5
value: 12.561
- type: recall_at_1
value: 26.715
- type: recall_at_10
value: 49.508
- type: recall_at_100
value: 71.76599999999999
- type: recall_at_1000
value: 91.118
- type: recall_at_3
value: 37.356
- type: recall_at_5
value: 41.836
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackEnglishRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.663
- type: map_at_10
value: 27.086
- type: map_at_100
value: 28.066999999999997
- type: map_at_1000
value: 28.18
- type: map_at_3
value: 24.819
- type: map_at_5
value: 26.332
- type: ndcg_at_1
value: 25.732
- type: ndcg_at_10
value: 31.613999999999997
- type: ndcg_at_100
value: 35.757
- type: ndcg_at_1000
value: 38.21
- type: ndcg_at_3
value: 28.332
- type: ndcg_at_5
value: 30.264000000000003
- type: precision_at_1
value: 25.732
- type: precision_at_10
value: 6.038
- type: precision_at_100
value: 1.034
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 13.864
- type: precision_at_5
value: 10.241999999999999
- type: recall_at_1
value: 19.663
- type: recall_at_10
value: 39.585
- type: recall_at_100
value: 57.718
- type: recall_at_1000
value: 74.26700000000001
- type: recall_at_3
value: 29.845
- type: recall_at_5
value: 35.105
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGamingRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 30.125
- type: map_at_10
value: 39.824
- type: map_at_100
value: 40.935
- type: map_at_1000
value: 41.019
- type: map_at_3
value: 37.144
- type: map_at_5
value: 38.647999999999996
- type: ndcg_at_1
value: 34.922
- type: ndcg_at_10
value: 45.072
- type: ndcg_at_100
value: 50.046
- type: ndcg_at_1000
value: 51.895
- type: ndcg_at_3
value: 40.251
- type: ndcg_at_5
value: 42.581
- type: precision_at_1
value: 34.922
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 1.0739999999999998
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 17.994
- type: precision_at_5
value: 12.475999999999999
- type: recall_at_1
value: 30.125
- type: recall_at_10
value: 57.253
- type: recall_at_100
value: 79.35799999999999
- type: recall_at_1000
value: 92.523
- type: recall_at_3
value: 44.088
- type: recall_at_5
value: 49.893
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackGisRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.298000000000002
- type: map_at_10
value: 21.479
- type: map_at_100
value: 22.387
- type: map_at_1000
value: 22.483
- type: map_at_3
value: 19.743
- type: map_at_5
value: 20.444000000000003
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 24.887
- type: ndcg_at_100
value: 29.544999999999998
- type: ndcg_at_1000
value: 32.417
- type: ndcg_at_3
value: 21.274
- type: ndcg_at_5
value: 22.399
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.932
- type: precision_at_100
value: 0.666
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 8.927
- type: precision_at_5
value: 6.056
- type: recall_at_1
value: 16.298000000000002
- type: recall_at_10
value: 34.031
- type: recall_at_100
value: 55.769000000000005
- type: recall_at_1000
value: 78.19500000000001
- type: recall_at_3
value: 23.799999999999997
- type: recall_at_5
value: 26.562
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackMathematicaRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.958
- type: map_at_10
value: 16.999
- type: map_at_100
value: 17.979
- type: map_at_1000
value: 18.112000000000002
- type: map_at_3
value: 15.010000000000002
- type: map_at_5
value: 16.256999999999998
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.985
- type: ndcg_at_100
value: 26.216
- type: ndcg_at_1000
value: 29.675
- type: ndcg_at_3
value: 17.28
- type: ndcg_at_5
value: 19.301
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.968
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 8.541
- type: precision_at_5
value: 6.468
- type: recall_at_1
value: 10.958
- type: recall_at_10
value: 29.903000000000002
- type: recall_at_100
value: 53.413
- type: recall_at_1000
value: 78.74799999999999
- type: recall_at_3
value: 19.717000000000002
- type: recall_at_5
value: 24.817
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackPhysicsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 21.217
- type: map_at_10
value: 29.677
- type: map_at_100
value: 30.928
- type: map_at_1000
value: 31.063000000000002
- type: map_at_3
value: 26.611
- type: map_at_5
value: 28.463
- type: ndcg_at_1
value: 26.083000000000002
- type: ndcg_at_10
value: 35.217
- type: ndcg_at_100
value: 40.715
- type: ndcg_at_1000
value: 43.559
- type: ndcg_at_3
value: 30.080000000000002
- type: ndcg_at_5
value: 32.701
- type: precision_at_1
value: 26.083000000000002
- type: precision_at_10
value: 6.622
- type: precision_at_100
value: 1.115
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 14.629
- type: precision_at_5
value: 10.837
- type: recall_at_1
value: 21.217
- type: recall_at_10
value: 47.031
- type: recall_at_100
value: 70.378
- type: recall_at_1000
value: 89.704
- type: recall_at_3
value: 32.427
- type: recall_at_5
value: 39.31
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackProgrammersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 19.274
- type: map_at_10
value: 26.398
- type: map_at_100
value: 27.711000000000002
- type: map_at_1000
value: 27.833000000000002
- type: map_at_3
value: 24.294
- type: map_at_5
value: 25.385
- type: ndcg_at_1
value: 24.886
- type: ndcg_at_10
value: 30.909
- type: ndcg_at_100
value: 36.941
- type: ndcg_at_1000
value: 39.838
- type: ndcg_at_3
value: 27.455000000000002
- type: ndcg_at_5
value: 28.828
- type: precision_at_1
value: 24.886
- type: precision_at_10
value: 5.6739999999999995
- type: precision_at_100
value: 1.0290000000000001
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 13.242
- type: precision_at_5
value: 9.292
- type: recall_at_1
value: 19.274
- type: recall_at_10
value: 39.643
- type: recall_at_100
value: 66.091
- type: recall_at_1000
value: 86.547
- type: recall_at_3
value: 29.602
- type: recall_at_5
value: 33.561
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.653666666666666
- type: map_at_10
value: 25.606666666666666
- type: map_at_100
value: 26.669333333333334
- type: map_at_1000
value: 26.795833333333334
- type: map_at_3
value: 23.43433333333333
- type: map_at_5
value: 24.609666666666666
- type: ndcg_at_1
value: 22.742083333333333
- type: ndcg_at_10
value: 29.978333333333335
- type: ndcg_at_100
value: 34.89808333333333
- type: ndcg_at_1000
value: 37.806583333333336
- type: ndcg_at_3
value: 26.223666666666674
- type: ndcg_at_5
value: 27.91033333333333
- type: precision_at_1
value: 22.742083333333333
- type: precision_at_10
value: 5.397083333333334
- type: precision_at_100
value: 0.9340000000000002
- type: precision_at_1000
value: 0.13691666666666663
- type: precision_at_3
value: 12.331083333333332
- type: precision_at_5
value: 8.805499999999999
- type: recall_at_1
value: 18.653666666666666
- type: recall_at_10
value: 39.22625000000001
- type: recall_at_100
value: 61.31049999999999
- type: recall_at_1000
value: 82.19058333333334
- type: recall_at_3
value: 28.517333333333333
- type: recall_at_5
value: 32.9565
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackStatsRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 16.07
- type: map_at_10
value: 21.509
- type: map_at_100
value: 22.335
- type: map_at_1000
value: 22.437
- type: map_at_3
value: 19.717000000000002
- type: map_at_5
value: 20.574
- type: ndcg_at_1
value: 18.865000000000002
- type: ndcg_at_10
value: 25.135999999999996
- type: ndcg_at_100
value: 29.483999999999998
- type: ndcg_at_1000
value: 32.303
- type: ndcg_at_3
value: 21.719
- type: ndcg_at_5
value: 23.039
- type: precision_at_1
value: 18.865000000000002
- type: precision_at_10
value: 4.263999999999999
- type: precision_at_100
value: 0.696
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 9.866999999999999
- type: precision_at_5
value: 6.902
- type: recall_at_1
value: 16.07
- type: recall_at_10
value: 33.661
- type: recall_at_100
value: 54.001999999999995
- type: recall_at_1000
value: 75.564
- type: recall_at_3
value: 23.956
- type: recall_at_5
value: 27.264
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackTexRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 10.847
- type: map_at_10
value: 15.518
- type: map_at_100
value: 16.384
- type: map_at_1000
value: 16.506
- type: map_at_3
value: 14.093
- type: map_at_5
value: 14.868
- type: ndcg_at_1
value: 13.764999999999999
- type: ndcg_at_10
value: 18.766
- type: ndcg_at_100
value: 23.076
- type: ndcg_at_1000
value: 26.344
- type: ndcg_at_3
value: 16.150000000000002
- type: ndcg_at_5
value: 17.373
- type: precision_at_1
value: 13.764999999999999
- type: precision_at_10
value: 3.572
- type: precision_at_100
value: 0.6779999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 7.88
- type: precision_at_5
value: 5.712
- type: recall_at_1
value: 10.847
- type: recall_at_10
value: 25.141999999999996
- type: recall_at_100
value: 44.847
- type: recall_at_1000
value: 68.92099999999999
- type: recall_at_3
value: 17.721999999999998
- type: recall_at_5
value: 20.968999999999998
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackUnixRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 18.377
- type: map_at_10
value: 26.005
- type: map_at_100
value: 26.996
- type: map_at_1000
value: 27.116
- type: map_at_3
value: 23.712
- type: map_at_5
value: 24.859
- type: ndcg_at_1
value: 22.201
- type: ndcg_at_10
value: 30.635
- type: ndcg_at_100
value: 35.623
- type: ndcg_at_1000
value: 38.551
- type: ndcg_at_3
value: 26.565
- type: ndcg_at_5
value: 28.28
- type: precision_at_1
value: 22.201
- type: precision_at_10
value: 5.41
- type: precision_at_100
value: 0.88
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 12.531
- type: precision_at_5
value: 8.806
- type: recall_at_1
value: 18.377
- type: recall_at_10
value: 40.908
- type: recall_at_100
value: 63.563
- type: recall_at_1000
value: 84.503
- type: recall_at_3
value: 29.793999999999997
- type: recall_at_5
value: 34.144999999999996
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWebmastersRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 20.246
- type: map_at_10
value: 27.528000000000002
- type: map_at_100
value: 28.78
- type: map_at_1000
value: 29.002
- type: map_at_3
value: 25.226
- type: map_at_5
value: 26.355
- type: ndcg_at_1
value: 25.099
- type: ndcg_at_10
value: 32.421
- type: ndcg_at_100
value: 37.2
- type: ndcg_at_1000
value: 40.693
- type: ndcg_at_3
value: 28.768
- type: ndcg_at_5
value: 30.23
- type: precision_at_1
value: 25.099
- type: precision_at_10
value: 6.245
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.767999999999999
- type: precision_at_5
value: 9.881
- type: recall_at_1
value: 20.246
- type: recall_at_10
value: 41.336
- type: recall_at_100
value: 63.098
- type: recall_at_1000
value: 86.473
- type: recall_at_3
value: 30.069000000000003
- type: recall_at_5
value: 34.262
- task:
type: Retrieval
dataset:
type: BeIR/cqadupstack
name: MTEB CQADupstackWordpressRetrieval
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 14.054
- type: map_at_10
value: 20.25
- type: map_at_100
value: 21.178
- type: map_at_1000
value: 21.288999999999998
- type: map_at_3
value: 18.584999999999997
- type: map_at_5
value: 19.536
- type: ndcg_at_1
value: 15.527
- type: ndcg_at_10
value: 23.745
- type: ndcg_at_100
value: 28.610999999999997
- type: ndcg_at_1000
value: 31.740000000000002
- type: ndcg_at_3
value: 20.461
- type: ndcg_at_5
value: 22.072
- type: precision_at_1
value: 15.527
- type: precision_at_10
value: 3.882
- type: precision_at_100
value: 0.6930000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 9.181000000000001
- type: precision_at_5
value: 6.433
- type: recall_at_1
value: 14.054
- type: recall_at_10
value: 32.714
- type: recall_at_100
value: 55.723
- type: recall_at_1000
value: 79.72399999999999
- type: recall_at_3
value: 23.832
- type: recall_at_5
value: 27.754
- task:
type: Retrieval
dataset:
type: climate-fever
name: MTEB ClimateFEVER
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 6.122
- type: map_at_10
value: 11.556
- type: map_at_100
value: 12.998000000000001
- type: map_at_1000
value: 13.202
- type: map_at_3
value: 9.657
- type: map_at_5
value: 10.585
- type: ndcg_at_1
value: 15.049000000000001
- type: ndcg_at_10
value: 17.574
- type: ndcg_at_100
value: 24.465999999999998
- type: ndcg_at_1000
value: 28.511999999999997
- type: ndcg_at_3
value: 13.931
- type: ndcg_at_5
value: 15.112
- type: precision_at_1
value: 15.049000000000001
- type: precision_at_10
value: 5.831
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 10.749
- type: precision_at_5
value: 8.365
- type: recall_at_1
value: 6.122
- type: recall_at_10
value: 22.207
- type: recall_at_100
value: 47.08
- type: recall_at_1000
value: 70.182
- type: recall_at_3
value: 13.416
- type: recall_at_5
value: 16.672
- task:
type: Retrieval
dataset:
type: dbpedia-entity
name: MTEB DBPedia
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 4.672
- type: map_at_10
value: 10.534
- type: map_at_100
value: 14.798
- type: map_at_1000
value: 15.927
- type: map_at_3
value: 7.317
- type: map_at_5
value: 8.726
- type: ndcg_at_1
value: 36.5
- type: ndcg_at_10
value: 26.098
- type: ndcg_at_100
value: 29.215999999999998
- type: ndcg_at_1000
value: 36.254999999999995
- type: ndcg_at_3
value: 29.247
- type: ndcg_at_5
value: 27.692
- type: precision_at_1
value: 47.25
- type: precision_at_10
value: 22.625
- type: precision_at_100
value: 7.042
- type: precision_at_1000
value: 1.6129999999999998
- type: precision_at_3
value: 34.083000000000006
- type: precision_at_5
value: 29.5
- type: recall_at_1
value: 4.672
- type: recall_at_10
value: 15.638
- type: recall_at_100
value: 36.228
- type: recall_at_1000
value: 58.831
- type: recall_at_3
value: 8.578
- type: recall_at_5
value: 11.18
- task:
type: Classification
dataset:
type: mteb/emotion
name: MTEB EmotionClassification
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 49.919999999999995
- type: f1
value: 45.37973678791632
- task:
type: Retrieval
dataset:
type: fever
name: MTEB FEVER
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 25.801000000000002
- type: map_at_10
value: 33.941
- type: map_at_100
value: 34.73
- type: map_at_1000
value: 34.793
- type: map_at_3
value: 31.705
- type: map_at_5
value: 33.047
- type: ndcg_at_1
value: 27.933000000000003
- type: ndcg_at_10
value: 38.644
- type: ndcg_at_100
value: 42.594
- type: ndcg_at_1000
value: 44.352000000000004
- type: ndcg_at_3
value: 34.199
- type: ndcg_at_5
value: 36.573
- type: precision_at_1
value: 27.933000000000003
- type: precision_at_10
value: 5.603000000000001
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 14.171
- type: precision_at_5
value: 9.786999999999999
- type: recall_at_1
value: 25.801000000000002
- type: recall_at_10
value: 50.876
- type: recall_at_100
value: 69.253
- type: recall_at_1000
value: 82.907
- type: recall_at_3
value: 38.879000000000005
- type: recall_at_5
value: 44.651999999999994
- task:
type: Retrieval
dataset:
type: fiqa
name: MTEB FiQA2018
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.142
- type: map_at_10
value: 13.841999999999999
- type: map_at_100
value: 14.960999999999999
- type: map_at_1000
value: 15.187000000000001
- type: map_at_3
value: 11.966000000000001
- type: map_at_5
value: 12.921
- type: ndcg_at_1
value: 18.364
- type: ndcg_at_10
value: 18.590999999999998
- type: ndcg_at_100
value: 24.153
- type: ndcg_at_1000
value: 29.104000000000003
- type: ndcg_at_3
value: 16.323
- type: ndcg_at_5
value: 17.000999999999998
- type: precision_at_1
value: 18.364
- type: precision_at_10
value: 5.216
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 10.751
- type: precision_at_5
value: 7.932
- type: recall_at_1
value: 9.142
- type: recall_at_10
value: 22.747
- type: recall_at_100
value: 44.585
- type: recall_at_1000
value: 75.481
- type: recall_at_3
value: 14.602
- type: recall_at_5
value: 17.957
- task:
type: Retrieval
dataset:
type: hotpotqa
name: MTEB HotpotQA
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 18.677
- type: map_at_10
value: 26.616
- type: map_at_100
value: 27.605
- type: map_at_1000
value: 27.711999999999996
- type: map_at_3
value: 24.396
- type: map_at_5
value: 25.627
- type: ndcg_at_1
value: 37.352999999999994
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 38.423
- type: ndcg_at_1000
value: 40.947
- type: ndcg_at_3
value: 29.885
- type: ndcg_at_5
value: 31.874999999999996
- type: precision_at_1
value: 37.352999999999994
- type: precision_at_10
value: 7.539999999999999
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.938
- type: precision_at_5
value: 12.943
- type: recall_at_1
value: 18.677
- type: recall_at_10
value: 37.698
- type: recall_at_100
value: 55.354000000000006
- type: recall_at_1000
value: 72.255
- type: recall_at_3
value: 28.406
- type: recall_at_5
value: 32.357
- task:
type: Classification
dataset:
type: mteb/imdb
name: MTEB ImdbClassification
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 74.3292
- type: ap
value: 68.30186110189658
- type: f1
value: 74.20709636944783
- task:
type: Retrieval
dataset:
type: msmarco
name: MTEB MSMARCO
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 6.889000000000001
- type: map_at_10
value: 12.321
- type: map_at_100
value: 13.416
- type: map_at_1000
value: 13.525
- type: map_at_3
value: 10.205
- type: map_at_5
value: 11.342
- type: ndcg_at_1
value: 7.092
- type: ndcg_at_10
value: 15.827
- type: ndcg_at_100
value: 21.72
- type: ndcg_at_1000
value: 24.836
- type: ndcg_at_3
value: 11.393
- type: ndcg_at_5
value: 13.462
- type: precision_at_1
value: 7.092
- type: precision_at_10
value: 2.7969999999999997
- type: precision_at_100
value: 0.583
- type: precision_at_1000
value: 0.08499999999999999
- type: precision_at_3
value: 5.019
- type: precision_at_5
value: 4.06
- type: recall_at_1
value: 6.889000000000001
- type: recall_at_10
value: 26.791999999999998
- type: recall_at_100
value: 55.371
- type: recall_at_1000
value: 80.12899999999999
- type: recall_at_3
value: 14.573
- type: recall_at_5
value: 19.557
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (en)
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 89.6374829001368
- type: f1
value: 89.20878379358307
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (de)
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 84.54212454212454
- type: f1
value: 82.81080100037023
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (es)
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.46430953969313
- type: f1
value: 86.00019824223267
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (fr)
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 81.31850923896022
- type: f1
value: 81.07860454762863
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (hi)
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 58.23234134098243
- type: f1
value: 56.63845098081841
- task:
type: Classification
dataset:
type: mteb/mtop_domain
name: MTEB MTOPDomainClassification (th)
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 72.28571428571429
- type: f1
value: 70.95796714592039
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (en)
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 70.68171454628363
- type: f1
value: 52.57188062729139
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (de)
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 60.521273598196665
- type: f1
value: 42.70492970339204
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (es)
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 64.32288192128087
- type: f1
value: 45.97360620220273
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (fr)
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 58.67209520826808
- type: f1
value: 42.82844991304579
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (hi)
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 41.95769092864826
- type: f1
value: 28.914127631431263
- task:
type: Classification
dataset:
type: mteb/mtop_intent
name: MTEB MTOPIntentClassification (th)
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 55.28390596745027
- type: f1
value: 38.33899250561289
- task:
type: Classification
dataset:
type: mteb/amazon_massive_intent
name: MTEB MassiveIntentClassification (en)
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 70.00336247478144
- type: f1
value: 68.72041942191649
- task:
type: Classification
dataset:
type: mteb/amazon_massive_scenario
name: MTEB MassiveScenarioClassification (en)
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.0268997982515
- type: f1
value: 75.29844481506652
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-p2p
name: MTEB MedrxivClusteringP2P
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 30.327566856300813
- task:
type: Clustering
dataset:
type: mteb/medrxiv-clustering-s2s
name: MTEB MedrxivClusteringS2S
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 28.01650210863619
- task:
type: Reranking
dataset:
type: mteb/mind_small
name: MTEB MindSmallReranking
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.11041256752524
- type: mrr
value: 32.14172939750204
- task:
type: Retrieval
dataset:
type: nfcorpus
name: MTEB NFCorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.527
- type: map_at_10
value: 9.283
- type: map_at_100
value: 11.995000000000001
- type: map_at_1000
value: 13.33
- type: map_at_3
value: 6.223
- type: map_at_5
value: 7.68
- type: ndcg_at_1
value: 36.223
- type: ndcg_at_10
value: 28.255999999999997
- type: ndcg_at_100
value: 26.355
- type: ndcg_at_1000
value: 35.536
- type: ndcg_at_3
value: 31.962000000000003
- type: ndcg_at_5
value: 30.61
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 21.889
- type: precision_at_100
value: 7.1080000000000005
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 30.857
- type: precision_at_5
value: 27.307
- type: recall_at_1
value: 3.527
- type: recall_at_10
value: 14.015
- type: recall_at_100
value: 28.402
- type: recall_at_1000
value: 59.795
- type: recall_at_3
value: 7.5969999999999995
- type: recall_at_5
value: 10.641
- task:
type: Retrieval
dataset:
type: nq
name: MTEB NQ
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 11.631
- type: map_at_10
value: 19.532
- type: map_at_100
value: 20.821
- type: map_at_1000
value: 20.910999999999998
- type: map_at_3
value: 16.597
- type: map_at_5
value: 18.197
- type: ndcg_at_1
value: 13.413
- type: ndcg_at_10
value: 24.628
- type: ndcg_at_100
value: 30.883
- type: ndcg_at_1000
value: 33.216
- type: ndcg_at_3
value: 18.697
- type: ndcg_at_5
value: 21.501
- type: precision_at_1
value: 13.413
- type: precision_at_10
value: 4.571
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 8.845
- type: precision_at_5
value: 6.889000000000001
- type: recall_at_1
value: 11.631
- type: recall_at_10
value: 38.429
- type: recall_at_100
value: 67.009
- type: recall_at_1000
value: 84.796
- type: recall_at_3
value: 22.74
- type: recall_at_5
value: 29.266
- task:
type: Retrieval
dataset:
type: quora
name: MTEB QuoraRetrieval
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 66.64
- type: map_at_10
value: 80.394
- type: map_at_100
value: 81.099
- type: map_at_1000
value: 81.122
- type: map_at_3
value: 77.289
- type: map_at_5
value: 79.25999999999999
- type: ndcg_at_1
value: 76.85
- type: ndcg_at_10
value: 84.68
- type: ndcg_at_100
value: 86.311
- type: ndcg_at_1000
value: 86.49900000000001
- type: ndcg_at_3
value: 81.295
- type: ndcg_at_5
value: 83.199
- type: precision_at_1
value: 76.85
- type: precision_at_10
value: 12.928999999999998
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.557
- type: precision_at_5
value: 23.576
- type: recall_at_1
value: 66.64
- type: recall_at_10
value: 93.059
- type: recall_at_100
value: 98.922
- type: recall_at_1000
value: 99.883
- type: recall_at_3
value: 83.49499999999999
- type: recall_at_5
value: 88.729
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering
name: MTEB RedditClustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 42.17131361041068
- task:
type: Clustering
dataset:
type: mteb/reddit-clustering-p2p
name: MTEB RedditClusteringP2P
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 48.01815621479994
- task:
type: Retrieval
dataset:
type: scidocs
name: MTEB SCIDOCS
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.198
- type: map_at_10
value: 7.550999999999999
- type: map_at_100
value: 9.232
- type: map_at_1000
value: 9.51
- type: map_at_3
value: 5.2940000000000005
- type: map_at_5
value: 6.343999999999999
- type: ndcg_at_1
value: 15.8
- type: ndcg_at_10
value: 13.553999999999998
- type: ndcg_at_100
value: 20.776
- type: ndcg_at_1000
value: 26.204
- type: ndcg_at_3
value: 12.306000000000001
- type: ndcg_at_5
value: 10.952
- type: precision_at_1
value: 15.8
- type: precision_at_10
value: 7.180000000000001
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.307
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 9.62
- type: recall_at_1
value: 3.198
- type: recall_at_10
value: 14.575
- type: recall_at_100
value: 35.758
- type: recall_at_1000
value: 62.317
- type: recall_at_3
value: 6.922000000000001
- type: recall_at_5
value: 9.767000000000001
- task:
type: STS
dataset:
type: mteb/sickr-sts
name: MTEB SICK-R
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 84.5217161312271
- type: cos_sim_spearman
value: 79.58562467776268
- type: euclidean_pearson
value: 76.69364353942403
- type: euclidean_spearman
value: 74.68959282070473
- type: manhattan_pearson
value: 76.81159265133732
- type: manhattan_spearman
value: 74.7519444048176
- task:
type: STS
dataset:
type: mteb/sts12-sts
name: MTEB STS12
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 83.70403706922605
- type: cos_sim_spearman
value: 74.28502198729447
- type: euclidean_pearson
value: 83.32719404608066
- type: euclidean_spearman
value: 75.92189433460788
- type: manhattan_pearson
value: 83.35841543005293
- type: manhattan_spearman
value: 75.94458615451978
- task:
type: STS
dataset:
type: mteb/sts13-sts
name: MTEB STS13
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 84.94127878986795
- type: cos_sim_spearman
value: 85.35148434923192
- type: euclidean_pearson
value: 81.71127467071571
- type: euclidean_spearman
value: 82.88240481546771
- type: manhattan_pearson
value: 81.72826221967252
- type: manhattan_spearman
value: 82.90725064625128
- task:
type: STS
dataset:
type: mteb/sts14-sts
name: MTEB STS14
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 83.1474704168523
- type: cos_sim_spearman
value: 79.20612995350827
- type: euclidean_pearson
value: 78.85993329596555
- type: euclidean_spearman
value: 78.91956572744715
- type: manhattan_pearson
value: 78.89999720522347
- type: manhattan_spearman
value: 78.93956842550107
- task:
type: STS
dataset:
type: mteb/sts15-sts
name: MTEB STS15
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 84.81255514055894
- type: cos_sim_spearman
value: 85.5217140762934
- type: euclidean_pearson
value: 82.15024353784499
- type: euclidean_spearman
value: 83.04155334389833
- type: manhattan_pearson
value: 82.18598945053624
- type: manhattan_spearman
value: 83.07248357693301
- task:
type: STS
dataset:
type: mteb/sts16-sts
name: MTEB STS16
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 80.63248465157822
- type: cos_sim_spearman
value: 82.53853238521991
- type: euclidean_pearson
value: 78.33936863828221
- type: euclidean_spearman
value: 79.16305579487414
- type: manhattan_pearson
value: 78.3888359870894
- type: manhattan_spearman
value: 79.18504473136467
- task:
type: STS
dataset:
type: mteb/sts17-crosslingual-sts
name: MTEB STS17 (en-en)
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 90.09066290639687
- type: cos_sim_spearman
value: 90.43893699357069
- type: euclidean_pearson
value: 82.39520777222396
- type: euclidean_spearman
value: 81.23948185395952
- type: manhattan_pearson
value: 82.35529784653383
- type: manhattan_spearman
value: 81.12681522483975
- task:
type: STS
dataset:
type: mteb/sts22-crosslingual-sts
name: MTEB STS22 (en)
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 63.52752323046846
- type: cos_sim_spearman
value: 63.19719780439462
- type: euclidean_pearson
value: 58.29085490641428
- type: euclidean_spearman
value: 58.975178656335046
- type: manhattan_pearson
value: 58.183542772416985
- type: manhattan_spearman
value: 59.190630462178994
- task:
type: STS
dataset:
type: mteb/stsbenchmark-sts
name: MTEB STSBenchmark
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 85.45100366635687
- type: cos_sim_spearman
value: 85.66816193002651
- type: euclidean_pearson
value: 81.87976731329091
- type: euclidean_spearman
value: 82.01382867690964
- type: manhattan_pearson
value: 81.88260155706726
- type: manhattan_spearman
value: 82.05258597906492
- task:
type: Reranking
dataset:
type: mteb/scidocs-reranking
name: MTEB SciDocsRR
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 77.53549990038017
- type: mrr
value: 93.37474163454556
- task:
type: Retrieval
dataset:
type: scifact
name: MTEB SciFact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 31.167
- type: map_at_10
value: 40.778
- type: map_at_100
value: 42.063
- type: map_at_1000
value: 42.103
- type: map_at_3
value: 37.12
- type: map_at_5
value: 39.205
- type: ndcg_at_1
value: 33.667
- type: ndcg_at_10
value: 46.662
- type: ndcg_at_100
value: 51.995999999999995
- type: ndcg_at_1000
value: 53.254999999999995
- type: ndcg_at_3
value: 39.397999999999996
- type: ndcg_at_5
value: 42.934
- type: precision_at_1
value: 33.667
- type: precision_at_10
value: 7.1
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 16.111
- type: precision_at_5
value: 11.600000000000001
- type: recall_at_1
value: 31.167
- type: recall_at_10
value: 63.744
- type: recall_at_100
value: 87.156
- type: recall_at_1000
value: 97.556
- type: recall_at_3
value: 44.0
- type: recall_at_5
value: 52.556000000000004
- task:
type: PairClassification
dataset:
type: mteb/sprintduplicatequestions-pairclassification
name: MTEB SprintDuplicateQuestions
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.55148514851486
- type: cos_sim_ap
value: 80.535236573428
- type: cos_sim_f1
value: 75.01331912626532
- type: cos_sim_precision
value: 80.27366020524515
- type: cos_sim_recall
value: 70.39999999999999
- type: dot_accuracy
value: 99.04851485148515
- type: dot_ap
value: 28.505358821499726
- type: dot_f1
value: 36.36363636363637
- type: dot_precision
value: 37.160751565762006
- type: dot_recall
value: 35.6
- type: euclidean_accuracy
value: 99.4990099009901
- type: euclidean_ap
value: 74.95819047075476
- type: euclidean_f1
value: 71.15489874110564
- type: euclidean_precision
value: 78.59733978234583
- type: euclidean_recall
value: 65.0
- type: manhattan_accuracy
value: 99.50198019801981
- type: manhattan_ap
value: 75.02070096015086
- type: manhattan_f1
value: 71.20535714285712
- type: manhattan_precision
value: 80.55555555555556
- type: manhattan_recall
value: 63.800000000000004
- type: max_accuracy
value: 99.55148514851486
- type: max_ap
value: 80.535236573428
- type: max_f1
value: 75.01331912626532
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering
name: MTEB StackExchangeClustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 54.13314692311623
- task:
type: Clustering
dataset:
type: mteb/stackexchange-clustering-p2p
name: MTEB StackExchangeClusteringP2P
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 31.115181648287145
- task:
type: Reranking
dataset:
type: mteb/stackoverflowdupquestions-reranking
name: MTEB StackOverflowDupQuestions
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.771112666694336
- type: mrr
value: 45.30415764790765
- task:
type: Summarization
dataset:
type: mteb/summeval
name: MTEB SummEval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 30.849429597669374
- type: cos_sim_spearman
value: 30.384175038360194
- type: dot_pearson
value: 29.030383429536823
- type: dot_spearman
value: 28.03273624951732
- task:
type: Retrieval
dataset:
type: trec-covid
name: MTEB TRECCOVID
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.19499999999999998
- type: map_at_10
value: 1.0959999999999999
- type: map_at_100
value: 5.726
- type: map_at_1000
value: 13.611999999999998
- type: map_at_3
value: 0.45399999999999996
- type: map_at_5
value: 0.67
- type: ndcg_at_1
value: 71.0
- type: ndcg_at_10
value: 55.352999999999994
- type: ndcg_at_100
value: 40.797
- type: ndcg_at_1000
value: 35.955999999999996
- type: ndcg_at_3
value: 63.263000000000005
- type: ndcg_at_5
value: 60.14000000000001
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 56.99999999999999
- type: precision_at_100
value: 41.199999999999996
- type: precision_at_1000
value: 16.154
- type: precision_at_3
value: 66.667
- type: precision_at_5
value: 62.8
- type: recall_at_1
value: 0.19499999999999998
- type: recall_at_10
value: 1.3639999999999999
- type: recall_at_100
value: 9.317
- type: recall_at_1000
value: 33.629999999999995
- type: recall_at_3
value: 0.49300000000000005
- type: recall_at_5
value: 0.756
- task:
type: Retrieval
dataset:
type: webis-touche2020
name: MTEB Touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 1.335
- type: map_at_10
value: 6.293
- type: map_at_100
value: 10.928
- type: map_at_1000
value: 12.359
- type: map_at_3
value: 3.472
- type: map_at_5
value: 4.935
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 16.178
- type: ndcg_at_100
value: 28.149
- type: ndcg_at_1000
value: 39.845000000000006
- type: ndcg_at_3
value: 19.171
- type: ndcg_at_5
value: 17.864
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 14.49
- type: precision_at_100
value: 6.306000000000001
- type: precision_at_1000
value: 1.3860000000000001
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 18.367
- type: recall_at_1
value: 1.335
- type: recall_at_10
value: 10.825999999999999
- type: recall_at_100
value: 39.251000000000005
- type: recall_at_1000
value: 74.952
- type: recall_at_3
value: 4.9110000000000005
- type: recall_at_5
value: 7.312
- task:
type: Classification
dataset:
type: mteb/toxic_conversations_50k
name: MTEB ToxicConversationsClassification
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 69.93339999999999
- type: ap
value: 13.87476602492533
- type: f1
value: 53.867357615848555
- task:
type: Classification
dataset:
type: mteb/tweet_sentiment_extraction
name: MTEB TweetSentimentExtractionClassification
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 62.43916242218449
- type: f1
value: 62.870386304954685
- task:
type: Clustering
dataset:
type: mteb/twentynewsgroups-clustering
name: MTEB TwentyNewsgroupsClustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 37.202082549859796
- task:
type: PairClassification
dataset:
type: mteb/twittersemeval2015-pairclassification
name: MTEB TwitterSemEval2015
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.65023544137807
- type: cos_sim_ap
value: 65.99787692764193
- type: cos_sim_f1
value: 62.10650887573965
- type: cos_sim_precision
value: 56.30901287553648
- type: cos_sim_recall
value: 69.23482849604221
- type: dot_accuracy
value: 79.10830303391549
- type: dot_ap
value: 48.80109642320246
- type: dot_f1
value: 51.418744625967314
- type: dot_precision
value: 40.30253107683091
- type: dot_recall
value: 71.00263852242745
- type: euclidean_accuracy
value: 82.45812719794957
- type: euclidean_ap
value: 60.09969493259607
- type: euclidean_f1
value: 57.658573789246226
- type: euclidean_precision
value: 55.62913907284768
- type: euclidean_recall
value: 59.84168865435356
- type: manhattan_accuracy
value: 82.46408773916671
- type: manhattan_ap
value: 60.116199786815116
- type: manhattan_f1
value: 57.683903860160235
- type: manhattan_precision
value: 53.41726618705036
- type: manhattan_recall
value: 62.69129287598945
- type: max_accuracy
value: 83.65023544137807
- type: max_ap
value: 65.99787692764193
- type: max_f1
value: 62.10650887573965
- task:
type: PairClassification
dataset:
type: mteb/twitterurlcorpus-pairclassification
name: MTEB TwitterURLCorpus
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34943920518494
- type: cos_sim_ap
value: 84.5428891020442
- type: cos_sim_f1
value: 77.09709933923172
- type: cos_sim_precision
value: 74.83150952967607
- type: cos_sim_recall
value: 79.50415768401602
- type: dot_accuracy
value: 84.53448208949432
- type: dot_ap
value: 73.96328242371995
- type: dot_f1
value: 70.00553786515299
- type: dot_precision
value: 63.58777665995976
- type: dot_recall
value: 77.86418232214352
- type: euclidean_accuracy
value: 86.87662514068381
- type: euclidean_ap
value: 81.45499631520235
- type: euclidean_f1
value: 73.46567109816063
- type: euclidean_precision
value: 69.71037533697381
- type: euclidean_recall
value: 77.6485987064983
- type: manhattan_accuracy
value: 86.88244654014825
- type: manhattan_ap
value: 81.47180273946366
- type: manhattan_f1
value: 73.44624393136418
- type: manhattan_precision
value: 70.80385852090032
- type: manhattan_recall
value: 76.29350169387126
- type: max_accuracy
value: 88.34943920518494
- type: max_ap
value: 84.5428891020442
- type: max_f1
value: 77.09709933923172
---
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
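If the checkpoint is used as a plain `sentence-transformers` model, a minimal sketch looks like the following. The repository id is assumed from the model name above, and note that the `specb` variants additionally use special bracket tokens around queries and documents, which the linked codebase handles:
```python
from sentence_transformers import SentenceTransformer

# Repository id assumed from the model name above
model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

queries = ["How do sentence embeddings work?"]
docs = ["Sentence embeddings map text to dense vectors that can be compared for semantic search."]

# Weighted-mean pooling is applied by the model's own Pooling module (see the architecture below)
query_embeddings = model.encode(queries)
doc_embeddings = model.encode(docs)
```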
## Evaluation Results
For eval results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
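As a rough illustration only, the setup above corresponds to wiring `MultipleNegativesRankingLoss` into `SentenceTransformer.fit()` along the following lines; the training pair shown is a placeholder, not the actual MS MARCO data, and the repository id is assumed:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Placeholder (query, relevant passage) pair standing in for the real training data
train_examples = [
    InputExample(texts=["what is a transformer", "A transformer is a neural network architecture."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")  # assumed id
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=1000,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```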
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
|
olivebradshaw/dummy-model
|
olivebradshaw
| 2022-10-03T11:33:35Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"en",
"region:us"
] | null | 2022-10-01T12:22:27Z |
---
language:
- en
tags:
- sentence-transformers # Example: audio
widget:
- source_sentence: "That is a happy person"
sentences:
- "That is a happy dog"
- "That is a very happy person"
- "Today is a sunny day"
example_title: "Happy"
thumbnail: "https://www.google.com/search?q=cat&source=lnms&tbm=isch&sa=X&ved=2ahUKEwi85fKL5cP6AhWES8AKHbVQD-MQ_AUoAXoECAIQAw&biw=1440&bih=730&dpr=1#imgrc=ik9IXqP62Fi4sM"
---
# This is the title
## This is a subtitle
As titles and subtitles you may want to include: Model Description, Intended Use and Limitations, How to Use, Limitations and Bias, Training Data, Training Procedure, Preprocessing, and Evaluation Results, along with anything else that seems relevant.
|
lewtun/my-awesome-setfit-model-3
|
lewtun
| 2022-10-03T09:06:25Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-03T09:06:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
tkubotake/distilbert-base-uncased-finetuned-emotion
|
tkubotake
| 2022-10-03T08:25:34Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-28T05:01:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.918
- name: F1
type: f1
value: 0.9179456491632857
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2263
- Accuracy: 0.918
- F1: 0.9179
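A minimal inference sketch using the `transformers` pipeline (the example sentence is illustrative, and the label names depend on the saved config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="tkubotake/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- label names depend on the saved config
```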
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8566 | 1.0 | 250 | 0.3283 | 0.903 | 0.9002 |
| 0.2607 | 2.0 | 500 | 0.2263 | 0.918 | 0.9179 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
PartiallyTyped/answerable_tydiqa_lm_pretrained_finnish
|
PartiallyTyped
| 2022-10-03T07:09:57Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"text generation",
"fi",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-02T14:52:13Z |
---
language:
- fi
tags:
- text generation
license: mit
datasets:
- answerable tydiqa
---
# ReadMe
This model is based on [Finnish-NLP/gpt2-finnish](https://huggingface.co/Finnish-NLP/gpt2-finnish) and has been further pretrained on [copenlu/answerable_tydiqa](https://huggingface.co/datasets/copenlu/answerable_tydiqa), specifically the `text` field of the Finnish samples, for 2 epochs.
To load the pretrained model with its language-modeling head, use `AutoModelForCausalLM.from_pretrained`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PartiallyTyped/answerable_tydiqa_lm_pretrained_finnish"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
```
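A short generation sketch on top of the snippet above (the Finnish prompt and sampling settings are illustrative):
```python
# Continuing from the snippet above; the prompt is illustrative
inputs = tokenizer("Suomen pääkaupunki on", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```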
|
lokibots/vit-patch16-1280-gpt2-large-image-summary
|
lokibots
| 2022-10-03T04:43:49Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"image-to-text",
"en",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-10-01T11:32:08Z |
---
language:
- en
tags:
- image-to-text
---
## lokibots/vit-patch16-1280-gpt2-large-image-summary
This model generates a summary from a given chart image. It accepts an image of size 1280x768 (or smaller) and produces a summary describing the contents of the image. **However, training is still required.**
## Sample inference code
```python
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, GPT2Tokenizer
from PIL import Image
model = VisionEncoderDecoderModel.from_pretrained("lokibots/vit-patch16-1280-gpt2-large-image-summary")
feature_extractor = ViTFeatureExtractor.from_pretrained("lokibots/vit-patch16-1280-gpt2-large-image-summary")
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-large')
image = Image.open("image_file").convert("RGB")
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
gen_kwargs = {"max_length": 1024, "num_beams": 4}
output_ids = model.generate(pixel_values, **gen_kwargs)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
```
|
sd-concepts-library/liminalspaces
|
sd-concepts-library
| 2022-10-03T04:23:32Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-03T04:23:28Z |
---
license: mit
---
### Liminalspaces on Stable Diffusion
This is the `<liminal image>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:






|
NeelNanda/SoLU
|
NeelNanda
| 2022-10-03T02:22:57Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-09-03T22:11:14Z |
---
license: bigscience-bloom-rail-1.0
---
|
g30rv17ys/ddpm-geeve-cnv-1500-300ep
|
g30rv17ys
| 2022-10-03T01:51:32Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-02T16:17:04Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-cnv-1500-300ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
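Pending that snippet, a hedged sketch of what it could look like for an unconditional DDPM checkpoint in `diffusers` (the repository id is assumed from this page):
```python
from diffusers import DDPMPipeline

# Assumed repository id for this checkpoint
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-cnv-1500-300ep")
image = pipeline().images[0]
image.save("sample.png")
```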
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-cnv-1500-300ep/tensorboard?#scalars)
|
quantumind/elonmusk-tweets-generator
|
quantumind
| 2022-10-03T01:32:25Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-10-03T01:07:42Z |
---
license: apache-2.0
---
A simple text-generation model trained on 17K+ Elon Musk tweets, with an accuracy of 92%.
|
donymorph/donyface
|
donymorph
| 2022-10-03T01:07:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-03T01:07:08Z |
---
license: creativeml-openrail-m
---
|
faceyacc/my-segmentation-model
|
faceyacc
| 2022-10-03T01:01:27Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"segformer",
"endpoints_compatible",
"region:us"
] | null | 2022-10-02T15:00:45Z |
## Sidewalk & Street Image Segmentation
This model starts from a *SegFormer model fine-tuned on ADE20k* and is intended to segment images of sidewalks, pedestrian walkways, and streets/avenues. The goal of this model is to build the "*eyes*" of a self-driving-car ML model.
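A minimal inference sketch, assuming the checkpoint follows the standard SegFormer layout (the repository id and image path are illustrative):
```python
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

model_id = "faceyacc/my-segmentation-model"  # assumed repository id
feature_extractor = SegformerFeatureExtractor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)

image = Image.open("sidewalk.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits have shape (batch, num_labels, height/4, width/4); upsample + argmax for per-pixel classes
print(outputs.logits.shape)
```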
|
din0s/t5-small-fr-finetuned-en-to-it
|
din0s
| 2022-10-02T22:46:16Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:ccmatrix",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-02T22:08:17Z |
---
tags:
- generated_from_trainer
datasets:
- ccmatrix
metrics:
- bleu
model-index:
- name: t5-small_fr-finetuned-en-to-it
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: ccmatrix
type: ccmatrix
config: en-it
split: train[3000:12000]
args: en-it
metrics:
- name: Bleu
type: bleu
value: 7.4222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small_fr-finetuned-en-to-it
This model is a fine-tuned version of [din0s/t5-small-finetuned-en-to-fr](https://huggingface.co/din0s/t5-small-finetuned-en-to-fr) on the ccmatrix dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3225
- Bleu: 7.4222
- Gen Len: 59.1127
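A hedged inference sketch; the task prefix used during fine-tuning is not documented here, so the prefix below is an assumption carried over from the T5 translation convention:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "din0s/t5-small-fr-finetuned-en-to-it"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "translate English to Italian: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```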
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 94 | 3.0406 | 3.2546 | 52.6127 |
| No log | 2.0 | 188 | 2.9278 | 3.1206 | 62.774 |
| No log | 3.0 | 282 | 2.8573 | 3.4206 | 63.6707 |
| No log | 4.0 | 376 | 2.8030 | 3.4847 | 66.408 |
| No log | 5.0 | 470 | 2.7602 | 3.8933 | 64.362 |
| 3.2982 | 6.0 | 564 | 2.7185 | 3.9298 | 66.058 |
| 3.2982 | 7.0 | 658 | 2.6842 | 4.0344 | 65.5773 |
| 3.2982 | 8.0 | 752 | 2.6536 | 4.3243 | 65.0047 |
| 3.2982 | 9.0 | 846 | 2.6233 | 4.5078 | 64.5813 |
| 3.2982 | 10.0 | 940 | 2.5966 | 4.6657 | 63.654 |
| 2.9837 | 11.0 | 1034 | 2.5743 | 4.7664 | 63.326 |
| 2.9837 | 12.0 | 1128 | 2.5526 | 4.9535 | 62.7327 |
| 2.9837 | 13.0 | 1222 | 2.5303 | 5.1386 | 63.5887 |
| 2.9837 | 14.0 | 1316 | 2.5122 | 5.1037 | 64.1667 |
| 2.9837 | 15.0 | 1410 | 2.4937 | 5.3304 | 63.116 |
| 2.8416 | 16.0 | 1504 | 2.4797 | 5.5006 | 61.4953 |
| 2.8416 | 17.0 | 1598 | 2.4627 | 5.5892 | 62.01 |
| 2.8416 | 18.0 | 1692 | 2.4497 | 5.8497 | 61.42 |
| 2.8416 | 19.0 | 1786 | 2.4372 | 6.0074 | 61.1587 |
| 2.8416 | 20.0 | 1880 | 2.4256 | 6.1464 | 60.522 |
| 2.8416 | 21.0 | 1974 | 2.4148 | 6.3117 | 59.5567 |
| 2.7428 | 22.0 | 2068 | 2.4039 | 6.4626 | 59.532 |
| 2.7428 | 23.0 | 2162 | 2.3939 | 6.5287 | 60.2307 |
| 2.7428 | 24.0 | 2256 | 2.3857 | 6.6093 | 60.22 |
| 2.7428 | 25.0 | 2350 | 2.3772 | 6.8004 | 59.396 |
| 2.7428 | 26.0 | 2444 | 2.3703 | 6.9433 | 59.5027 |
| 2.6779 | 27.0 | 2538 | 2.3631 | 7.0153 | 59.1433 |
| 2.6779 | 28.0 | 2632 | 2.3575 | 7.1783 | 58.9793 |
| 2.6779 | 29.0 | 2726 | 2.3514 | 7.1639 | 59.362 |
| 2.6779 | 30.0 | 2820 | 2.3457 | 7.2176 | 58.9927 |
| 2.6779 | 31.0 | 2914 | 2.3411 | 7.2599 | 59.1433 |
| 2.6335 | 32.0 | 3008 | 2.3374 | 7.284 | 59.1787 |
| 2.6335 | 33.0 | 3102 | 2.3339 | 7.3678 | 59.07 |
| 2.6335 | 34.0 | 3196 | 2.3307 | 7.3364 | 58.9813 |
| 2.6335 | 35.0 | 3290 | 2.3281 | 7.3318 | 58.96 |
| 2.6335 | 36.0 | 3384 | 2.3259 | 7.394 | 59.0787 |
| 2.6335 | 37.0 | 3478 | 2.3245 | 7.4133 | 59.0393 |
| 2.609 | 38.0 | 3572 | 2.3232 | 7.383 | 59.1887 |
| 2.609 | 39.0 | 3666 | 2.3227 | 7.4105 | 59.1227 |
| 2.609 | 40.0 | 3760 | 2.3225 | 7.4222 | 59.1127 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
ChrisC1657/Aqua_tests
|
ChrisC1657
| 2022-10-02T20:33:02Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-02T17:06:56Z |
---
license: mit
---
Aqua_anime_girl_blk_reg_2000.ckpt
# Dataset
- Training: 16 images
- Regularization: 400 images - black reg images
# Info
- Model Used: Waifu Diffusion 1.3 epoch 5
- Steps: 2000
- Keyword: Aqua (Use this in the prompt)
- Class Phrase: Anime girl
Aqua_animegirl_wrd_reg_2000.ckpt
# Dataset
- Training: 16 images
- Regularization: 400 images - Waifu Research Department reg images
# Info
- Model Used: Waifu Diffusion 1.3 epoch 5
- Steps: 2000
- Keyword: Aqua (Use this in the prompt)
- Class Phrase: Anime girl
Aqua_CxRwMwCYViVz2JUH_blk_reg_2000.ckpt
# Dataset
- Training: 16 images
- Regularization: 400 images - black reg images
# Info
- Model Used: Waifu Diffusion 1.3 epoch 5
- Steps: 2000
- Keyword: Aqua (Use this in the prompt)
- Class Phrase: CxRwMwCYViVz2JUH
Aqua_CxRwMwCYViVz2JUH_wrd_reg_2000.ckpt
# Dataset
- Training: 16 images
- Regularization: 400 images - Waifu Research Department reg images
# Info
- Model Used: Waifu Diffusion 1.3 epoch 5
- Steps: 2000
- Keyword: Aqua (Use this in the prompt)
- Class Phrase: CxRwMwCYViVz2JUH
|
ericntay/stbl_clinical_bert_ft_rs10
|
ericntay
| 2022-10-02T19:49:57Z | 118 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-02T19:31:32Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs10
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0846
- F1: 0.9297
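A minimal inference sketch with the token-classification pipeline (the example sentence is illustrative, and the entity label set depends on the saved config):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ericntay/stbl_clinical_bert_ft_rs10",
    aggregation_strategy="simple",
)

print(ner("The patient was started on metformin for type 2 diabetes."))
```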
## Model description
More information needed
## Intended uses & limitations
More information needed
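Still, the checkpoint can be loaded like any token-classification model; a minimal sketch (the clinical entity label set is not documented in this card, so inspect the returned groups yourself):
```python
from transformers import pipeline

# Minimal sketch: the label set used for fine-tuning is not documented here,
# so print the entity_group values the model actually returns.
ner = pipeline(
    "token-classification",
    model="ericntay/stbl_clinical_bert_ft_rs10",
    aggregation_strategy="simple",
)
print(ner("The patient was started on 40 mg of furosemide for acute CHF exacerbation."))
```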
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2834 | 1.0 | 101 | 0.0930 | 0.8446 |
| 0.0669 | 2.0 | 202 | 0.0732 | 0.8938 |
| 0.033 | 3.0 | 303 | 0.0676 | 0.9119 |
| 0.0168 | 4.0 | 404 | 0.0703 | 0.9219 |
| 0.0084 | 5.0 | 505 | 0.0742 | 0.9245 |
| 0.006 | 6.0 | 606 | 0.0772 | 0.9252 |
| 0.0033 | 7.0 | 707 | 0.0844 | 0.9239 |
| 0.0023 | 8.0 | 808 | 0.0855 | 0.9272 |
| 0.0019 | 9.0 | 909 | 0.0843 | 0.9296 |
| 0.0013 | 10.0 | 1010 | 0.0878 | 0.9262 |
| 0.0012 | 11.0 | 1111 | 0.0857 | 0.9266 |
| 0.0008 | 12.0 | 1212 | 0.0846 | 0.9297 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
coastalcph/fairlex-ecthr-minilm
|
coastalcph
| 2022-10-02T19:30:00Z | 114 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"legal",
"fairlex",
"en",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
pipeline_tag: fill-mask
license: cc-by-nc-sa-4.0
tags:
- legal
- fairlex
widget:
- text: "The applicant submitted that her husband was subjected to treatment amounting to <mask> whilst in the custody of Adana Security Directorate"
---
# FairLex: A multilingual benchmark for evaluating fairness in legal text processing
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian and Chinese) and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
---
Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.
---
## Pre-training details
For the purpose of this work, we release four domain-specific BERT models with continued pre-training on the corpora of the examined datasets (ECtHR, SCOTUS, FSCS, SPC).
We train mini-sized BERT models with 6 Transformer blocks, 384 hidden units, and 12 attention heads.
We warm-start all models from the public MiniLMv2 (Wang et al., 2021): the version distilled from RoBERTa (Liu et al., 2019) for the English datasets (ECtHR, SCOTUS), and the one distilled from XLM-R (Conneau et al., 2021) for the rest (trilingual FSCS, and Chinese SPC).
## Models list
| Model name | Training corpora | Language |
|-----------------------------------|------------------|--------------------|
| `coastalcph/fairlex-ecthr-minilm` | ECtHR | `en` |
| `coastalcph/fairlex-scotus-minilm` | SCOTUS | `en` |
| `coastalcph/fairlex-fscs-minilm` | FSCS | [`de`, `fr`, `it`] |
| `coastalcph/fairlex-cail-minilm` | CAIL | `zh` |
## Load Pretrained Model
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("coastalcph/fairlex-ecthr-minilm")
model = AutoModel.from_pretrained("coastalcph/fairlex-ecthr-minilm")
```
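The checkpoint can also be queried through the fill-mask pipeline; a minimal sketch reusing the widget example from this card:
```python
from transformers import pipeline

# Minimal sketch using the widget example declared in this card's metadata.
fill_mask = pipeline("fill-mask", model="coastalcph/fairlex-ecthr-minilm")
predictions = fill_mask(
    "The applicant submitted that her husband was subjected to treatment "
    "amounting to <mask> whilst in the custody of Adana Security Directorate"
)
for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```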
## Evaluation on downstream tasks
Consider the experiments in the article:
_Ilias Chalkidis, Tommaso Passini, Sheng Zhang, Letizia Tomada, Sebastian Felix Schwemer, and Anders Søgaard. 2022. FairLex: A multilingual benchmark for evaluating fairness in legal text processing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland._
## Author - Publication
```
@inproceedings{chalkidis-2022-fairlex,
author={Chalkidis, Ilias and Passini, Tommaso and Zhang, Sheng and
Tomada, Letizia and Schwemer, Sebastian Felix and Søgaard, Anders},
title={FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing},
booktitle={Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics},
year={2022},
address={Dublin, Ireland}
}
```
Ilias Chalkidis on behalf of [CoAStaL NLP Group](https://coastalcph.github.io)
| Github: [@ilias.chalkidis](https://github.com/iliaschalkidis) | Twitter: [@KiddoThe2B](https://twitter.com/KiddoThe2B) |
|
spicytaco17/model
|
spicytaco17
| 2022-10-02T17:53:15Z | 112 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-03T14:43:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4500
- Accuracy: 0.7127
- F1: 0.7141
- Precision: 0.7157
- Recall: 0.7127
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Finnish-NLP/t5-base-nl36-finnish
|
Finnish-NLP
| 2022-10-02T15:58:48Z | 43 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-04-15T10:50:33Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-base-nl36 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-base-nl36](https://huggingface.co/google/t5-efficient-base-nl36) architecture's layer depth which means both the encoder and the decoder have 36 transformer layers compared to the original T5 "base" model's architecture of 12 transformer layers.
In total, this model has 814 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish", from_pt=True)
```
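Because the model was pretrained only on the span-corruption objective, raw generation is limited to filling sentinel spans (and output quality is not guaranteed without fine-tuning). A minimal sketch, assuming the tokenizer exposes the standard `<extra_id_*>` sentinel tokens; the Finnish sentence is purely illustrative:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-base-nl36-finnish")

# Illustrative span-corruption input: <extra_id_0> marks the masked span.
text = "Helsinki on Suomen <extra_id_0> ja sen suurin kaupunki."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```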
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 1M steps with a batch size of 64 (33B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the sixth row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
Finnish-NLP/t5-small-nl24-finnish
|
Finnish-NLP
| 2022-10-02T15:58:11Z | 26 | 1 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-04-03T16:37:27Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-small-nl24 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-small-nl24](https://huggingface.co/google/t5-efficient-small-nl24) architecture's layer depth which means both the encoder and the decoder have 24 transformer layers compared to the original T5 "small" model's architecture of 6 transformer layers.
In total, this model has 260 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-small-nl24-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-small-nl24-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-small-nl24-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-small-nl24-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the fourth row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
Sandipan1994/t5-small-finetuned-eli5-extra-finetune
|
Sandipan1994
| 2022-10-02T15:57:26Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-02T08:32:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-small-finetuned-eli5-extra-finetune
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 9.8647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eli5-extra-finetune
This model is a fine-tuned version of [Sandipan1994/t5-small-finetuned-eli5](https://huggingface.co/Sandipan1994/t5-small-finetuned-eli5) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6711
- Rouge1: 9.8647
- Rouge2: 1.9166
- Rougel: 7.9523
- Rougelsum: 9.1657
- Gen Len: 18.9981
## Model description
More information needed
## Intended uses & limitations
More information needed
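Since the exact prompt format used during fine-tuning is not documented here, the sketch below simply feeds an ELI5-style question through the text2text pipeline (an assumption, not the authors' documented usage):
```python
from transformers import pipeline

# Hedged sketch: the prompt format used in fine-tuning is not documented in this card.
generator = pipeline(
    "text2text-generation",
    model="Sandipan1994/t5-small-finetuned-eli5-extra-finetune",
)
question = "why does the sky look blue during the day?"
print(generator(question, max_length=64, num_beams=4)[0]["generated_text"])
```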
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.8865 | 1.0 | 17040 | 3.6929 | 9.9257 | 1.9075 | 8.026 | 9.2238 | 19.0 |
| 3.8777 | 2.0 | 34080 | 3.6764 | 9.8568 | 1.9027 | 7.9443 | 9.1535 | 18.9981 |
| 3.8667 | 3.0 | 51120 | 3.6711 | 9.8647 | 1.9166 | 7.9523 | 9.1657 | 18.9981 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Finnish-NLP/t5-mini-nl8-finnish
|
Finnish-NLP
| 2022-10-02T15:57:23Z | 185 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-03-31T10:43:36Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-mini-nl8 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-mini-nl8](https://huggingface.co/google/t5-efficient-mini-nl8) architecture's layer depth which means both the encoder and the decoder have 8 transformer layers compared to the original T5 "mini" model's architecture of 4 transformer layers.
In total, this model has 72 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-mini-nl8-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the second row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
Finnish-NLP/t5-small-nl16-finnish
|
Finnish-NLP
| 2022-10-02T15:55:05Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-08-18T10:51:43Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# T5-small-nl16 for Finnish
Pretrained T5 model on Finnish language using a span-based masked language modeling (MLM) objective. T5 was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
More precisely, it was pretrained with the span-based masked language modeling (MLM) objective. Spans of the input sequence are masked by so-called sentinel tokens (a.k.a unique mask tokens) and the output sequence is formed as a concatenation of the same sentinel tokens and the real masked tokens. This way, the model learns an inner representation of the Finnish language.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on span-based masked language modeling (MLM) objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-small-nl16](https://huggingface.co/google/t5-efficient-small-nl16) architecture's layer depth which means both the encoder and the decoder have 16 transformer layers compared to the original T5 "small" model's architecture of 6 transformer layers.
In total, this model has 184 million parameters.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-small-nl16-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-small-nl16-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/t5-small-nl16-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/t5-small-nl16-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens.
When fine-tuned on those datasets, this model (the third row of the table) achieves the following accuracy results compared to our other T5 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
jamesesguerra/mt5-small-finetuned-1.1.0
|
jamesesguerra
| 2022-10-02T12:52:58Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-02T12:23:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-finetuned-1.1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-small-finetuned-1.1.0
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.5550
- Rouge1: 18.5458
- Rouge2: 5.7454
- Rougel: 15.5515
- Rougelsum: 15.7806
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 15.4799 | 1.0 | 97 | 6.9041 | 16.3755 | 5.6407 | 13.8081 | 13.8801 |
| 9.8046 | 2.0 | 194 | 4.5550 | 18.5458 | 5.7454 | 15.5515 | 15.7806 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
huggingtweets/evelynisepic
|
huggingtweets
| 2022-10-02T11:55:16Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-02T11:53:59Z |
---
language: en
thumbnail: http://www.huggingtweets.com/evelynisepic/1664711711588/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1559633536130465798/8RaUnaJl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Evelyn 🏳️⚧️</div>
<div style="text-align: center; font-size: 14px;">@evelynisepic</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Evelyn 🏳️⚧️.
| Data | Evelyn 🏳️⚧️ |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 32 |
| Short tweets | 838 |
| Tweets kept | 2376 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ej5kc1v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @evelynisepic's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3bpnoglx) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3bpnoglx/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/evelynisepic')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
justinpinkney/stylegan3-t-lhq-256
|
justinpinkney
| 2022-10-02T11:18:54Z | 0 | 3 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-02T11:14:34Z |
---
license: mit
inference: false
---
## StyleGAN3-t LHQ 256

- Name: LHQ-256
- Author: Justin Pinkney/LambdaLabs
- Author URL: https://www.justinpinkney.com/ https://lambdalabs.com/
- Dataset: LHQ
- Source URL: https://twitter.com/Buntworthy/status/1460980442409185287?s=20
- Resolution: 256x256
- Config: T
- Notes: FID=2.31, trained 25 Mimgs, gamma=2, mapping layers=2
For more models see the [Awesome Pretrained StyleGAN3 repo](https://github.com/justinpinkney/awesome-pretrained-stylegan3).
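If the checkpoint is distributed as a standard StyleGAN3 pickle, it can be loaded with the NVlabs `stylegan3` code on the Python path; a rough sketch (the filename below is a placeholder, check the repository for the actual file):
```python
# Rough sketch: requires https://github.com/NVlabs/stylegan3 cloned and on PYTHONPATH
# so the pickle's dnnlib/torch_utils references resolve. The filename is a placeholder.
import pickle
import torch

with open("lhq-256-stylegan3-t.pkl", "rb") as f:  # placeholder filename
    G = pickle.load(f)["G_ema"]

z = torch.randn([1, G.z_dim])
img = G(z, None)  # NCHW float tensor, values roughly in [-1, 1]
print(img.shape)
```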
|
Den4ikAI/rubert-tiny-squad
|
Den4ikAI
| 2022-10-02T11:03:32Z | 142 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-11T15:49:10Z |
---
license: mit
pipeline_tag: question-answering
widget:
- context: "Пушкин родился 6 июля 1799 года"
- text: "Когда родился Пушкин?"
example_title: "test"
---
A fine-tuned RuBERT based on cointegrated/rubert-tiny2.
Sample size: 4.
Epochs: 16.
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Den4ikAI/rubert-tiny-squad",
tokenizer="Den4ikAI/rubert-tiny-squad"
)
predictions = qa_pipeline({
'context': "Пушкин родился 6 июля 1799 года",
'question': "Когда родился Пушкин?"
})
print(predictions)
# output:
#{'score': 0.9413797664642334, 'start': 15, 'end': 31, 'answer': '6 июля 1799 года'}
```
|
Den4ikAI/rubert_large_squad_2
|
Den4ikAI
| 2022-10-02T11:03:11Z | 245 | 4 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-12T03:08:58Z |
---
license: mit
pipeline_tag: question-answering
widget:
- context: "Пушкин родился 6 июля 1799 года"
- text: "Когда родился Пушкин?"
example_title: "test"
---
A fine-tuned RuBERT based on sberbank-ai/ruBert-base.
Sample size: 4.
Epochs: 4.
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Den4ikAI/rubert_large_squad_2",
tokenizer="Den4ikAI/rubert_large_squad_2"
)
predictions = qa_pipeline({
'context': "Пушкин родился 6 июля 1799 года",
'question': "Когда родился Пушкин?"
})
print(predictions)
# output:
#{'score': 0.8013797664642334, 'start': 15, 'end': 31, 'answer': '6 июля 1799 года'}
```
|
Den4ikAI/rubert-large-squad
|
Den4ikAI
| 2022-10-02T11:00:22Z | 154 | 0 |
transformers
|
[
"transformers",
"pytorch",
"question-answering",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-09T01:31:51Z |
---
license: mit
pipeline_tag: question-answering
widget:
- context: "Пушкин родился 6 июля 1799 года"
- text: "Когда родился Пушкин?"
example_title: "test"
---
A fine-tuned RuBERT based on sberbank-ai/ruBert-base.
Sample size: 4.
Epochs: 2.
```python
from transformers import pipeline
qa_pipeline = pipeline(
"question-answering",
model="Den4ikAI/rubert-large-squad",
tokenizer="Den4ikAI/rubert-large-squad"
)
predictions = qa_pipeline({
'context': "Пушкин родился 6 июля 1799 года",
'question': "Когда родился Пушкин?"
})
print(predictions)
# output:
#{'score': 0.9613797664642334, 'start': 15, 'end': 31, 'answer': '6 июля 1799 года'}
```
|
Den4ikAI/russian_sensitive_topics
|
Den4ikAI
| 2022-10-02T10:42:13Z | 331 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-02T10:27:58Z |
---
license: mit
widget:
- text: "Я трахну тебя и убью"
example_title: "test_1"
- text: "Завтра я приду в эту школу и убью всех"
example_title: "test_2"
- text: "Я убью тебя с ак47"
example_title: "test_2"
language:
- ru
---
A fine-tuned ruBert-large from sberbank-ai.
Trained for up to 10 epochs.
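A minimal usage sketch reusing one of the widget examples from this card (the checkpoint's label names are not documented here, so inspect the returned labels yourself):
```python
from transformers import pipeline

# Minimal sketch: the id-to-label mapping is not documented in this card,
# so print whatever labels and scores the checkpoint returns.
classifier = pipeline("text-classification", model="Den4ikAI/russian_sensitive_topics")
print(classifier("Завтра я приду в эту школу и убью всех"))
```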
|
rwheel/q-FrozenLake-v1-8x8-no_slippery
|
rwheel
| 2022-10-02T10:08:53Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-02T10:08:47Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-no_slippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# Note: load_from_hub and evaluate_agent are helper functions defined in the training
# notebook (e.g. the Hugging Face Deep RL course Q-Learning notebook), not a published package.
model = load_from_hub(repo_id="rwheel/q-FrozenLake-v1-8x8-no_slippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
farisk263/PPO
|
farisk263
| 2022-10-02T09:24:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-02T08:45:12Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 235.00 +/- 23.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the exact name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the .zip filename is assumed; adjust it to the file actually stored in this repo
checkpoint = load_from_hub(repo_id="farisk263/PPO", filename="PPO-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Imran1/sentimen_analysis_yelp
|
Imran1
| 2022-10-02T07:46:30Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-22T07:22:34Z |
---
widget:
- text: "the food is not good."
  example_title: "example_1"
- text: "I like this dish."
  example_title: "example_2"
---
# Yelp reviews
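A minimal usage sketch with the example inputs above (the label mapping is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Imran1/sentimen_analysis_yelp")
print(classifier("the food is not good."))
print(classifier("I like this dish."))
```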
|
worachot-n/WangchanBERTa_LimeSoda_FakeNews
|
worachot-n
| 2022-10-02T07:41:21Z | 104 | 1 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-02T07:25:59Z |
Labels:
- LABEL 0: Fact
- LABEL 1: Fake
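A minimal usage sketch with the standard `text-classification` pipeline (the exact label strings returned depend on the model config; by the mapping above, label 0 means Fact and label 1 means Fake):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="worachot-n/WangchanBERTa_LimeSoda_FakeNews")

# WangchanBERTa is a Thai language model, so pass Thai news text here
thai_news_text = "..."  # replace with the news item to check
print(classifier(thai_news_text))  # e.g. [{'label': 'LABEL_1', 'score': ...}] -> Fake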
|
FIT17/Reinforce-Pixelcopter-PLE-v0
|
FIT17
| 2022-10-02T05:42:42Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-02T05:42:35Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: -0.20 +/- 1.83
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
FIT17/Reinforce-CartPole-v1
|
FIT17
| 2022-10-02T05:41:02Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-02T05:40:23Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 127.50 +/- 35.97
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
pedrocaribe/DialoGPT-medium-LL
|
pedrocaribe
| 2022-10-02T05:39:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-02T04:11:17Z |
---
tags:
- conversational
---
# DialoGPT model trained on testimony given during the trial of Brazil's ex-president.
|
kvssetty/kvs-image-classification
|
kvssetty
| 2022-10-02T05:01:19Z | 0 | 0 |
keras
|
[
"keras",
"Image Classification",
"Keras",
"TensorFlow",
"region:us"
] | null | 2022-10-01T12:09:49Z |
---
version: 1.0.0
library_name: keras
tags:
- Image Classification
- Keras
- TensorFlow
extra_gated_prompt: "You agree to not use the model to conduct experiments that cause harm to human subjects."
extra_gated_fields:
Company: text
Country: text
  I agree to use this model for non-commercial use ONLY: checkbox
---
## Welcome to KVS's Computer Vision Image Classification Model repo
This is my first model repository on Hugging Face and it is a test repo, so you may not find anything interesting here. I am implementing an image classification model using a CNN architecture and the TensorFlow Python library.
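As a rough illustration of the planned approach, here is a minimal Keras CNN sketch (illustrative only, not this repository's actual code; the input size and class count are assumptions):
```python
from tensorflow import keras

# small CNN classifier; 224x224 RGB inputs and 10 classes are assumed
model = keras.Sequential([
    keras.layers.Input(shape=(224, 224, 3)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```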
|
yuntian-deng/latex2im_ss_100
|
yuntian-deng
| 2022-10-02T04:37:34Z | 2 | 0 |
diffusers
|
[
"diffusers",
"en",
"dataset:yuntian-deng/im2latex-100k",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-02T04:36:55Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: yuntian-deng/im2latex-100k
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# latex2im_ss_100
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `yuntian-deng/im2latex-100k` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
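Pending the authors' snippet, a minimal sketch assuming the repository stores a standard `DDPMPipeline` (as the tags suggest):
```python
from diffusers import DDPMPipeline

# load the trained pipeline and sample a single image
pipeline = DDPMPipeline.from_pretrained("yuntian-deng/latex2im_ss_100")
image = pipeline().images[0]
image.save("sample.png")
```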
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: no
### Training results
📈 [TensorBoard logs](https://huggingface.co/yuntian-deng/latex2im_ss_100/tensorboard?#scalars)
|
waifu-research-department/Ryougi-Shiki
|
waifu-research-department
| 2022-10-02T04:24:36Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-10-02T03:04:57Z |
# Description
Trainer: naotsue
Ryougi Shiki from Kara no Kyoukai
# Dataset
>Training: 26 images
>Regularization: 100 images
# Info
>Model Used: Waifu Diffusion 1.3 (Epoch 5)
>Steps: 3000
>Keyword: SHIKI (Use this in the prompt)
>Class Phrase: 1girl

|
Sandipan1994/t5-small-finetuned-eli5
|
Sandipan1994
| 2022-10-02T01:30:41Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:eli5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-26T03:25:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5
metrics:
- rouge
model-index:
- name: t5-small-finetuned-eli5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: eli5
type: eli5
config: LFQA_reddit
split: train_eli5
args: LFQA_reddit
metrics:
- name: Rouge1
type: rouge
value: 9.944
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-eli5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the eli5 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7275
- Rouge1: 9.944
- Rouge2: 1.908
- Rougel: 8.0145
- Rougelsum: 9.2275
- Gen Len: 18.9988
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.9806 | 1.0 | 17040 | 3.7726 | 9.8475 | 1.872 | 7.9462 | 9.1258 | 18.9972 |
| 3.9458 | 2.0 | 34080 | 3.7369 | 9.9232 | 1.8981 | 7.9922 | 9.2061 | 18.9988 |
| 3.9355 | 3.0 | 51120 | 3.7275 | 9.944 | 1.908 | 8.0145 | 9.2275 | 18.9988 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
waifu-research-department/Cirno
|
waifu-research-department
| 2022-10-02T00:22:30Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-01T23:55:26Z |
---
license: mit
---
# Description
Trainer: Hank
Cirno from Touhou
# Dataset
>Training: 16 images
>Regularization: 18 images
# Info
>Model Used: Waifu Diffusion 1.2
>Steps: 3k
>Keyword: A photo of sks (Use this in the prompt)
>Class Phrase: ice_fairy

|
IIIT-L/roberta-large-finetuned-TRAC-DS
|
IIIT-L
| 2022-10-01T20:45:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-01T17:17:37Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-large-finetuned-TRAC-DS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-TRAC-DS
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8198
- Accuracy: 0.7190
- Precision: 0.6955
- Recall: 0.6979
- F1: 0.6963
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9538 | 1.0 | 612 | 0.8083 | 0.6111 | 0.6192 | 0.6164 | 0.5994 |
| 0.7924 | 2.0 | 1224 | 0.7594 | 0.6601 | 0.6688 | 0.6751 | 0.6424 |
| 0.6844 | 3.0 | 1836 | 0.6986 | 0.7042 | 0.6860 | 0.6969 | 0.6858 |
| 0.5715 | 3.99 | 2448 | 0.7216 | 0.7075 | 0.6957 | 0.6978 | 0.6925 |
| 0.45 | 4.99 | 3060 | 0.7963 | 0.7288 | 0.7126 | 0.7074 | 0.7073 |
| 0.352 | 5.99 | 3672 | 1.0824 | 0.7141 | 0.6999 | 0.6774 | 0.6818 |
| 0.2546 | 6.99 | 4284 | 1.0884 | 0.7230 | 0.7006 | 0.7083 | 0.7028 |
| 0.1975 | 7.99 | 4896 | 1.5338 | 0.7337 | 0.7090 | 0.7063 | 0.7074 |
| 0.1656 | 8.99 | 5508 | 1.8182 | 0.7100 | 0.6882 | 0.6989 | 0.6896 |
| 0.1358 | 9.98 | 6120 | 2.1623 | 0.7173 | 0.6917 | 0.6959 | 0.6934 |
| 0.1235 | 10.98 | 6732 | 2.3249 | 0.7141 | 0.6881 | 0.6914 | 0.6888 |
| 0.1003 | 11.98 | 7344 | 2.3474 | 0.7124 | 0.6866 | 0.6920 | 0.6887 |
| 0.0826 | 12.98 | 7956 | 2.3574 | 0.7083 | 0.6853 | 0.6959 | 0.6874 |
| 0.0727 | 13.98 | 8568 | 2.4989 | 0.7116 | 0.6858 | 0.6934 | 0.6883 |
| 0.0553 | 14.98 | 9180 | 2.8090 | 0.7026 | 0.6747 | 0.6710 | 0.6725 |
| 0.0433 | 15.97 | 9792 | 2.6647 | 0.7255 | 0.7010 | 0.7028 | 0.7018 |
| 0.0449 | 16.97 | 10404 | 2.6568 | 0.7247 | 0.7053 | 0.6997 | 0.7010 |
| 0.0373 | 17.97 | 11016 | 2.7632 | 0.7149 | 0.6888 | 0.6938 | 0.6909 |
| 0.0278 | 18.97 | 11628 | 2.8245 | 0.7124 | 0.6866 | 0.6930 | 0.6889 |
| 0.0288 | 19.97 | 12240 | 2.8198 | 0.7190 | 0.6955 | 0.6979 | 0.6963 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rojamet/mora
|
rojamet
| 2022-10-01T20:43:34Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-10-01T20:43:34Z |
---
license: bigscience-openrail-m
---
|
Bistolero/du_ge_all_2
|
Bistolero
| 2022-10-01T20:28:17Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-01T20:10:50Z |
---
tags:
- generated_from_trainer
model-index:
- name: du_ge_all_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# du_ge_all_2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Bistolero/nl3
|
Bistolero
| 2022-10-01T20:26:52Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-01T20:09:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: nl3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nl3
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
MoososCap/SpongeBob-SquarePants-Diffusion
|
MoososCap
| 2022-10-01T19:15:12Z | 0 | 2 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-01T18:20:25Z |
---
license: creativeml-openrail-m
---
Modified from the original model: CompVis/stable-diffusion-v-1-4-original
Trained using the following images:
https://i.imgur.com/D76R0eV.jpg
https://i.imgur.com/7zQ6f72.jpg
https://i.imgur.com/T2vcv5K.jpg
https://i.imgur.com/T4RsGHU.jpg
https://i.imgur.com/CRrskPZ.jpg
https://i.imgur.com/HG9Ba3q.jpg
https://i.imgur.com/X0XV8sG.jpg
https://i.imgur.com/RTnZIMr.jpg
https://i.imgur.com/4QVQodx.jpg
https://i.imgur.com/VTsdYj8.jpg
https://i.imgur.com/MM4ng1M.jpg
If you don't want to import the model yourself, feel free to use the Colab below:
https://colab.research.google.com/drive/1MJ96yoU5J8h1fBWzabBNYBmK_MvNtx71?usp=sharing
|
sd-concepts-library/852style-girl
|
sd-concepts-library
| 2022-10-01T19:07:34Z | 0 | 23 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-01T19:07:30Z |
---
license: mit
---
### <852style-girl> on Stable Diffusion
This is the `<852style-girl>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
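A minimal sketch for loading the learned embedding into a diffusers pipeline (the `learned_embeds.bin` filename follows the library's usual convention and is assumed here):
```python
import torch
from diffusers import StableDiffusionPipeline
from huggingface_hub import hf_hub_download

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# download the learned token embedding (a {token: tensor} dict)
embeds_path = hf_hub_download("sd-concepts-library/852style-girl", "learned_embeds.bin")
token, embedding = next(iter(torch.load(embeds_path, map_location="cpu").items()))

# register the new token and copy its embedding into the text encoder
pipe.tokenizer.add_tokens(token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embedding

image = pipe(f"a portrait of a girl in the style of {token}").images[0]
```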
Here is the new concept you will be able to use as a `style`:




|
huggingtweets/luisbetx9-microversoslt
|
huggingtweets
| 2022-10-01T19:00:42Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-01T18:54:23Z |
---
language: en
thumbnail: http://www.huggingtweets.com/luisbetx9-microversoslt/1664650577553/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572306079282872326/rX5Nbrid_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1296918435469897732/ctOlkbD3_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BetaP̾e̾t̾r̾a̾ 🅼²🆙 & MicroversosLT</div>
<div style="text-align: center; font-size: 14px;">@luisbetx9-microversoslt</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BetaP̾e̾t̾r̾a̾ 🅼²🆙 & MicroversosLT.
| Data | BetaP̾e̾t̾r̾a̾ 🅼²🆙 | MicroversosLT |
| --- | --- | --- |
| Tweets downloaded | 3248 | 1105 |
| Retweets | 1892 | 709 |
| Short tweets | 644 | 185 |
| Tweets kept | 712 | 211 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zzml0e6d/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @luisbetx9-microversoslt's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ewz96ki7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ewz96ki7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/luisbetx9-microversoslt')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
LunNova/sd1.4-pony-finetune
|
LunNova
| 2022-10-01T18:35:14Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-01T18:35:14Z |
---
license: creativeml-openrail-m
---
|
gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier
|
gabrielgmendonca
| 2022-10-01T18:10:15Z | 75 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-28T11:15:15Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier-finetuned-chico-xavier
This model is a fine-tuned version of [gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier](https://huggingface.co/gabrielgmendonca/bert-base-portuguese-cased-finetuned-chico-xavier) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8630
- Validation Loss: 1.7215
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3430, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.8630 | 1.7215 | 0 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
waifu-research-department/Aqua
|
waifu-research-department
| 2022-10-01T16:14:03Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-01T15:54:14Z |
---
license: mit
---
# Description
Trainer: Chris
Aqua from Konosuba
# Dataset
>Training: 16 images
>Regularization: 3249 images - waifu-research-department reg images
# Info
>Model Used: Waifu Diffusion 1.3 epoch 5
>Steps: 3000
>Keyword: Aqua (Use this in the prompt)
>Class Phrase: Useless_Goddess
|
huggingtweets/elonmusk-nftfreaks-nftgirl
|
huggingtweets
| 2022-10-01T13:26:17Z | 119 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-01T13:24:06Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-nftfreaks-nftgirl/1664630772232/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572573363255525377/Xz3fufYY_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1524408283674591232/ZcdTVEPl_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1384551299681714177/fHRGvDJR_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & NFT Freaks 🗝🏰🦸🏿♂️ & NFTGirl 🖼</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-nftfreaks-nftgirl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & NFT Freaks 🗝🏰🦸🏿♂️ & NFTGirl 🖼.
| Data | Elon Musk | NFT Freaks 🗝🏰🦸🏿♂️ | NFTGirl 🖼 |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3247 | 2210 |
| Retweets | 121 | 1753 | 298 |
| Short tweets | 984 | 306 | 395 |
| Tweets kept | 2095 | 1188 | 1517 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3aevkd35/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-nftfreaks-nftgirl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/al5jjb8v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/al5jjb8v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-nftfreaks-nftgirl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Betka/finetuning-sentiment-model-3000-samples
|
Betka
| 2022-10-01T10:17:53Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-01T10:06:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.87248322147651
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2850
- Accuracy: 0.8733
- F1: 0.8725
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
philschmid/distilbert-onnx-banking77
|
philschmid
| 2022-10-01T07:40:53Z | 27 | 5 |
generic
|
[
"generic",
"onnx",
"text-classification",
"endpoints-template",
"optimum",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-24T19:53:29Z |
---
tags:
- text-classification
- endpoints-template
- optimum
library_name: generic
---
# Optimized and Quantized DistilBERT with a custom pipeline with handler.py
> NOTE: Blog post coming soon
This is a template repository for text classification using Optimum and ONNX Runtime to support generic inference with the Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `handler.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload the optimum model and tokenizers as well as the `text-classification` pipeline needed for inference. This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
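A minimal sketch of such a handler (the class name, request format, and use of `ORTModelForSequenceClassification` are assumptions; see the repository's `handler.py` for the actual implementation):
```python
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSequenceClassification

class EndpointHandler:
    def __init__(self, path=""):
        # called once at startup: load the optimized/quantized ONNX model and tokenizer,
        # then build the text-classification pipeline used for inference
        model = ORTModelForSequenceClassification.from_pretrained(path)
        tokenizer = AutoTokenizer.from_pretrained(path)
        self.pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)

    def __call__(self, data):
        # called per request; expects {"inputs": "..."} and returns the pipeline output
        inputs = data.get("inputs", data)
        return self.pipeline(inputs)
```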
add
```
library_name: generic
```
to the readme.
_Note: the `generic` community image currently only supports `inputs` as a parameter and no additional parameters._
|
Bistolero/german_dutchall_mixed2ep
|
Bistolero
| 2022-10-01T03:53:57Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-01T03:34:34Z |
---
tags:
- generated_from_trainer
model-index:
- name: german_dutchall_mixed2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german_dutchall_mixed2ep
This model is a fine-tuned version of [Bistolero/nl_ge_alltr](https://huggingface.co/Bistolero/nl_ge_alltr) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/crb-portraits
|
sd-concepts-library
| 2022-10-01T01:57:47Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-01T01:57:36Z |
---
license: mit
---
### CRB Portraits on Stable Diffusion
This is the `<crbportrait>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:









|
athugodage/T5-RLS500
|
athugodage
| 2022-10-01T01:47:15Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-01T01:05:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ru_t5model_for_legalsimplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ru_t5model_for_legalsimplification
This model is a fine-tuned version of [IlyaGusev/rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Rouge1: 0.5364
- Rouge2: 0.1481
- Rougel: 0.506
- Rougelsum: 0.4917
- Gen Len: 163.03
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 157 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| No log | 2.0 | 314 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| No log | 3.0 | 471 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 4.0 | 628 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 5.0 | 785 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 6.0 | 942 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 7.0 | 1099 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 8.0 | 1256 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 9.0 | 1413 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
| 0.0 | 10.0 | 1570 | nan | 0.5364 | 0.1481 | 0.506 | 0.4917 | 163.03 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
rajistics/california_housing
|
rajistics
| 2022-09-30T23:53:36Z | 0 | 2 |
sklearn
|
[
"sklearn",
"skops",
"tabular-regression",
"region:us"
] |
tabular-regression
| 2022-09-30T23:44:07Z |
---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-regression
widget:
structuredData:
AveBedrms:
- 0.9806451612903225
- 1.0379746835443038
- 0.9601449275362319
AveOccup:
- 2.587096774193548
- 2.8658227848101268
- 2.6449275362318843
AveRooms:
- 7.275268817204301
- 5.39493670886076
- 6.536231884057971
HouseAge:
- 38.0
- 25.0
- 39.0
Latitude:
- 37.44
- 37.31
- 34.16
Longitude:
- -122.19
- -122.03
- -118.07
MedInc:
- 9.3198
- 5.3508
- 6.4761
Population:
- 1203.0
- 1132.0
- 730.0
---
# Model description
[More Information Needed]
## Intended uses & limitations
[More Information Needed]
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|--------------------------|---------------|
| bootstrap | True |
| ccp_alpha | 0.0 |
| criterion | squared_error |
| max_depth | |
| max_features | 1.0 |
| max_leaf_nodes | |
| max_samples | |
| min_impurity_decrease | 0.0 |
| min_samples_leaf | 1 |
| min_samples_split | 2 |
| min_weight_fraction_leaf | 0.0 |
| n_estimators | 100 |
| n_jobs | |
| oob_score | False |
| random_state | |
| verbose | 0 |
| warm_start | False |
</details>
### Model Plot
The model plot is below.
<style>#sk-container-id-2 {color: black;background-color: white;}#sk-container-id-2 pre{padding: 0;}#sk-container-id-2 div.sk-toggleable {background-color: white;}#sk-container-id-2 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-2 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-2 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-2 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-2 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-2 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-2 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-2 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-2 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-2 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-2 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-2 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-2 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-2 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-2 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-2 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-2 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-2 div.sk-item {position: relative;z-index: 1;}#sk-container-id-2 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-2 div.sk-item::before, #sk-container-id-2 div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-2 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-2 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-2 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-2 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-2 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-2 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-2 div.sk-label-container {text-align: center;}#sk-container-id-2 
div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-2 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-2" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>RandomForestRegressor()</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-2" type="checkbox" checked><label for="sk-estimator-id-2" class="sk-toggleable__label sk-toggleable__label-arrow">RandomForestRegressor</label><div class="sk-toggleable__content"><pre>RandomForestRegressor()</pre></div></div></div></div></div>
## Evaluation Results
You can find the details about evaluation process and the evaluation results.
| Metric | Value |
|----------|---------|
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
[More Information Needed]
```
</details>
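Pending the authors' snippet, a minimal loading sketch (the pickle filename below is an assumption; check the repository's files for the actual name):
```python
import joblib
import pandas as pd
from huggingface_hub import hf_hub_download

# NOTE: "model.pkl" is assumed; use the filename actually stored in the repo
model_path = hf_hub_download(repo_id="rajistics/california_housing", filename="model.pkl")
model = joblib.load(model_path)

# feature columns follow the widget example above
features = pd.DataFrame([{
    "MedInc": 9.3198, "HouseAge": 38.0, "AveRooms": 7.275268817204301,
    "AveBedrms": 0.9806451612903225, "Population": 1203.0,
    "AveOccup": 2.587096774193548, "Latitude": 37.44, "Longitude": -122.19,
}])
print(model.predict(features))
```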
# Model Card Authors
This model card is written by following authors:
[More Information Needed]
# Model Card Contact
You can contact the model card authors through following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
|
Bistolero/genlen2ep
|
Bistolero
| 2022-09-30T23:29:34Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-30T23:12:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: genlen2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# genlen2ep
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-concepts-library/baluchitherian
|
sd-concepts-library
| 2022-09-30T21:10:04Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-30T21:09:58Z |
---
license: mit
---
### Baluchitherian on Stable Diffusion
This is the `<baluchiter>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
stevhliu/my_awesome_wnut_model
|
stevhliu
| 2022-09-30T18:27:37Z | 176 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-30T17:31:41Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: stevhliu/my_awesome_wnut_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# stevhliu/my_awesome_wnut_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1210
- Validation Loss: 0.2698
- Train Precision: 0.5099
- Train Recall: 0.3995
- Train F1: 0.4480
- Train Accuracy: 0.9444
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 636, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.3233 | 0.3099 | 0.4155 | 0.2117 | 0.2805 | 0.9333 | 0 |
| 0.1600 | 0.2743 | 0.5111 | 0.3589 | 0.4216 | 0.9416 | 1 |
| 0.1210 | 0.2698 | 0.5099 | 0.3995 | 0.4480 | 0.9444 | 2 |
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ankurani/bert-base-cased-finetuned-ner
|
ankurani
| 2022-09-30T18:09:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-24T14:53:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
abu2sid/t5-small-finetuned-xsum_v3
|
abu2sid
| 2022-09-30T17:48:27Z | 35 | 1 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-30T17:47:42Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: t5-small-finetuned-xsum_v3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum_v3
This model is a fine-tuned version of [Rocketknight1/t5-small-finetuned-xsum](https://huggingface.co/Rocketknight1/t5-small-finetuned-xsum) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.22.2
- TensorFlow 2.8.2
- Datasets 2.5.1
- Tokenizers 0.12.1
|
cardiffnlp/roberta-base-tweet-topic-single-2020
|
cardiffnlp
| 2022-09-30T17:45:15Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"dataset:cardiffnlp/tweet_topic_single",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-30T07:32:09Z |
---
datasets:
- cardiffnlp/tweet_topic_single
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/roberta-base-tweet-topic-single-2020
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: cardiffnlp/tweet_topic_single
type: cardiffnlp/tweet_topic_single
args: cardiffnlp/tweet_topic_single
split: test_2021
metrics:
- name: F1
type: f1
value: 0.8682811577082102
- name: F1 (macro)
type: f1_macro
value: 0.7296667105332716
- name: Accuracy
type: accuracy
value: 0.8682811577082102
pipeline_tag: text-classification
widget:
- text: "I'm sure the {@Tampa Bay Lightning@} would’ve rather faced the Flyers but man does their experience versus the Blue Jackets this year and last help them a lot versus this Islanders team. Another meat grinder upcoming for the good guys"
example_title: "Example 1"
- text: "Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US."
example_title: "Example 2"
---
# cardiffnlp/roberta-base-tweet-topic-single-2020
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) dataset. It is fine-tuned on the `train_2020` split and validated on the `test_2021` split of tweet_topic.
Fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py). It achieves the following results on the test_2021 set:
- F1 (micro): 0.8682811577082102
- F1 (macro): 0.7296667105332716
- Accuracy: 0.8682811577082102
### Usage
```python
from transformers import pipeline
pipe = pipeline("text-classification", "cardiffnlp/roberta-base-tweet-topic-single-2020")
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
```
### Reference
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
```
|
ankurani/roberta-base-finetuned-ner
|
ankurani
| 2022-09-30T17:16:39Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-09T08:13:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
sd-dreambooth-library/rollerbeetle
|
sd-dreambooth-library
| 2022-09-30T17:02:57Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-09-30T17:02:54Z |
---
license: mit
---
### rollerbeetle on Stable Diffusion via Dreambooth
#### model by killer415tv
This is the Stable Diffusion model fine-tuned on the rollerbeetle concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of rollerbeetle mount**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
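A minimal inference sketch with diffusers (assuming this repository is stored in diffusers pipeline format, as produced by the Dreambooth training notebook):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/rollerbeetle", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of rollerbeetle mount in a forest").images[0]
image.save("rollerbeetle.png")
```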
Here are the images used for training this concept:









|
IIIT-L/roberta-large-finetuned-combined-DS
|
IIIT-L
| 2022-09-30T16:42:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-30T12:53:23Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-large-finetuned-combined-DS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-combined-DS
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2062
- Accuracy: 0.7001
- Precision: 0.6703
- Recall: 0.6700
- F1: 0.6701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 43
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.8804 | 1.0 | 711 | 0.8517 | 0.6573 | 0.6786 | 0.6253 | 0.6231 |
| 0.6949 | 2.0 | 1422 | 0.7444 | 0.6833 | 0.6609 | 0.6647 | 0.6604 |
| 0.5674 | 3.0 | 2133 | 0.8379 | 0.6798 | 0.6571 | 0.6659 | 0.6575 |
| 0.433 | 3.99 | 2844 | 0.8703 | 0.7079 | 0.6947 | 0.6801 | 0.6809 |
| 0.3314 | 4.99 | 3555 | 1.1792 | 0.6861 | 0.6672 | 0.6558 | 0.6569 |
| 0.2519 | 5.99 | 4266 | 1.5574 | 0.6966 | 0.6761 | 0.6639 | 0.6662 |
| 0.2083 | 6.99 | 4977 | 1.8781 | 0.6952 | 0.6681 | 0.6592 | 0.6619 |
| 0.1773 | 7.99 | 5688 | 1.8687 | 0.6959 | 0.6677 | 0.6748 | 0.6675 |
| 0.1536 | 8.99 | 6399 | 2.2483 | 0.7037 | 0.6788 | 0.6674 | 0.6694 |
| 0.1305 | 9.99 | 7110 | 2.4602 | 0.6875 | 0.6597 | 0.6681 | 0.6612 |
| 0.0982 | 10.98 | 7821 | 2.5573 | 0.6994 | 0.6705 | 0.6728 | 0.6709 |
| 0.0858 | 11.98 | 8532 | 2.8048 | 0.6994 | 0.6765 | 0.6730 | 0.6737 |
| 0.0734 | 12.98 | 9243 | 3.0408 | 0.6945 | 0.6640 | 0.6628 | 0.6626 |
| 0.0625 | 13.98 | 9954 | 3.0047 | 0.7037 | 0.6784 | 0.6757 | 0.6764 |
| 0.0434 | 14.98 | 10665 | 3.0789 | 0.6987 | 0.6737 | 0.6669 | 0.6691 |
| 0.0432 | 15.98 | 11376 | 2.9647 | 0.6945 | 0.6649 | 0.6684 | 0.6663 |
| 0.0326 | 16.98 | 12087 | 3.3076 | 0.6931 | 0.6630 | 0.6563 | 0.6583 |
| 0.032 | 17.97 | 12798 | 3.1890 | 0.7022 | 0.6737 | 0.6702 | 0.6717 |
| 0.0275 | 18.97 | 13509 | 3.1798 | 0.7029 | 0.6738 | 0.6750 | 0.6744 |
| 0.0251 | 19.97 | 14220 | 3.2062 | 0.7001 | 0.6703 | 0.6700 | 0.6701 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.1+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Bistolero/german4ep_4b
|
Bistolero
| 2022-09-30T16:33:12Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-30T16:12:32Z |
---
tags:
- generated_from_trainer
model-index:
- name: german4ep_4b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# german4ep_4b
This model is a fine-tuned version of [Bistolero/german_2EP](https://huggingface.co/Bistolero/german_2EP) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 25
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
ioanfr/distilbert-base-uncased-finetuned-cola
|
ioanfr
| 2022-09-30T16:28:22Z | 44 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-30T14:09:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5340667882909217
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8124
- Matthews Correlation: 0.5341
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5227 | 1.0 | 535 | 0.5222 | 0.4210 |
| 0.3467 | 2.0 | 1070 | 0.5046 | 0.4855 |
| 0.2335 | 3.0 | 1605 | 0.5637 | 0.5173 |
| 0.1813 | 4.0 | 2140 | 0.7634 | 0.5200 |
| 0.1334 | 5.0 | 2675 | 0.8124 | 0.5341 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.5.1
- Tokenizers 0.13.0
|
ericntay/stbl_clinical_bert_ft_rs7
|
ericntay
| 2022-09-30T16:07:57Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-30T15:49:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft_rs7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft_rs7
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0848
- F1: 0.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2755 | 1.0 | 101 | 0.0986 | 0.8484 |
| 0.0655 | 2.0 | 202 | 0.0780 | 0.8873 |
| 0.0299 | 3.0 | 303 | 0.0622 | 0.9047 |
| 0.0145 | 4.0 | 404 | 0.0675 | 0.9110 |
| 0.0097 | 5.0 | 505 | 0.0706 | 0.9141 |
| 0.0057 | 6.0 | 606 | 0.0753 | 0.9174 |
| 0.0032 | 7.0 | 707 | 0.0755 | 0.9182 |
| 0.0024 | 8.0 | 808 | 0.0835 | 0.9219 |
| 0.0014 | 9.0 | 909 | 0.0838 | 0.9197 |
| 0.0013 | 10.0 | 1010 | 0.0838 | 0.9204 |
| 0.0009 | 11.0 | 1111 | 0.0850 | 0.9183 |
| 0.0009 | 12.0 | 1212 | 0.0848 | 0.9208 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
anas-awadalla/bart-large-finetuned-squad-seq2seq
|
anas-awadalla
| 2022-09-30T16:02:03Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-29T19:23:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-large-finetuned-squad-seq2seq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-squad-seq2seq
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
mpapucci/it5-topic-classification-tag-it
|
mpapucci
| 2022-09-30T15:05:59Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Text Classification",
"it",
"dataset:TAG-IT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T10:07:10Z |
---
language:
- it
tags:
- Text Classification
datasets:
- TAG-IT
---
Write an italian sentence with the prefix "Classifica Argomento: " to get a topic classification of the sentence.
The dataset used for the task is: [TAG-IT](https://sites.google.com/view/tag-it-2020/).
The model is a fine tuned version of [IT5-base](https://huggingface.co/gsarti/it5-base) of Sarti and Nissim.
|
mpapucci/it5-multitask-classification-topic-age-gender-tag-it
|
mpapucci
| 2022-09-30T15:05:42Z | 53 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Text Classification",
"it",
"dataset:TAG-IT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T21:03:28Z |
---
language:
- it
tags:
- Text Classification
datasets:
- TAG-IT
---
Write an italian sentence with one of the following prefixes:
* "Classifica Età: " to get an age classification of the sentence;
* "Classifica Argomento: " to get a topic classification of the sentence;
* "Classifica Genere: " to get a gender classification of the sentence;
The dataset used for the task is: [TAG-IT](https://sites.google.com/view/tag-it-2020/).
The model is a fine tuned version of [IT5-base](https://huggingface.co/gsarti/it5-base) of Sarti and Nissim.
|
mfreihaut/refinement-finetuned-mnli-2
|
mfreihaut
| 2022-09-30T13:55:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-29T16:36:24Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: refinement-finetuned-mnli-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# refinement-finetuned-mnli-2
This model is a fine-tuned version of [mfreihaut/refinement-finetuned-mnli-1](https://huggingface.co/mfreihaut/refinement-finetuned-mnli-1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0242
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 303 | 0.3730 |
| 1.1146 | 2.0 | 606 | 0.9860 |
| 1.1146 | 3.0 | 909 | 0.7304 |
| 1.0018 | 4.0 | 1212 | 0.6386 |
| 1.0045 | 5.0 | 1515 | 0.4228 |
| 1.0045 | 6.0 | 1818 | 0.6769 |
| 0.9618 | 7.0 | 2121 | 0.3008 |
| 0.9618 | 8.0 | 2424 | 0.4496 |
| 0.964 | 9.0 | 2727 | 0.1826 |
| 0.9586 | 10.0 | 3030 | 0.0367 |
| 0.9586 | 11.0 | 3333 | 0.1811 |
| 1.0467 | 12.0 | 3636 | 0.1352 |
| 1.0467 | 13.0 | 3939 | 0.0612 |
| 1.0047 | 14.0 | 4242 | 0.1702 |
| 1.0012 | 15.0 | 4545 | 0.0622 |
| 1.0012 | 16.0 | 4848 | 0.7077 |
| 1.0514 | 17.0 | 5151 | 0.2146 |
| 1.0514 | 18.0 | 5454 | 0.5531 |
| 0.9389 | 19.0 | 5757 | 1.2304 |
| 0.9229 | 20.0 | 6060 | 0.6252 |
| 0.9229 | 21.0 | 6363 | 0.6844 |
| 0.9334 | 22.0 | 6666 | 0.5663 |
| 0.9334 | 23.0 | 6969 | 0.9912 |
| 0.9312 | 24.0 | 7272 | 0.3112 |
| 0.8971 | 25.0 | 7575 | 0.4511 |
| 0.8971 | 26.0 | 7878 | 0.3860 |
| 0.9022 | 27.0 | 8181 | 0.5904 |
| 0.9022 | 28.0 | 8484 | 0.4710 |
| 0.7568 | 29.0 | 8787 | 0.8233 |
| 0.6753 | 30.0 | 9090 | 0.6951 |
| 0.6753 | 31.0 | 9393 | 0.6363 |
| 0.5802 | 32.0 | 9696 | 0.8018 |
| 0.5802 | 33.0 | 9999 | 0.9381 |
| 0.5323 | 34.0 | 10302 | 0.9941 |
| 0.5218 | 35.0 | 10605 | 0.9418 |
| 0.5218 | 36.0 | 10908 | 0.9236 |
| 0.4558 | 37.0 | 11211 | 0.4542 |
| 0.4247 | 38.0 | 11514 | 0.9279 |
| 0.4247 | 39.0 | 11817 | 0.9567 |
| 0.43 | 40.0 | 12120 | 0.8077 |
| 0.43 | 41.0 | 12423 | 0.9595 |
| 0.352 | 42.0 | 12726 | 0.9189 |
| 0.3393 | 43.0 | 13029 | 0.8762 |
| 0.3393 | 44.0 | 13332 | 1.0505 |
| 0.316 | 45.0 | 13635 | 0.9273 |
| 0.316 | 46.0 | 13938 | 1.0716 |
| 0.2983 | 47.0 | 14241 | 1.0084 |
| 0.2503 | 48.0 | 14544 | 1.1027 |
| 0.2503 | 49.0 | 14847 | 1.0478 |
| 0.2462 | 50.0 | 15150 | 1.0242 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
Myashka/GPT_neo_python_QA
|
Myashka
| 2022-09-30T13:44:05Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-09-30T13:37:19Z |
# GPT neo tuned for QA about Python abstract
## About
The model is the GPT_neo with 1,2B parameters. Was fine-tuned at 10% sample from SO dataset for QA task.
Base Promt: "Question: ...\nAnswer:"
|
facebook/vit-msn-large
|
facebook
| 2022-09-30T13:22:41Z | 483 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vit_msn",
"image-feature-extraction",
"vision",
"dataset:imagenet-1k",
"arxiv:2204.07141",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-feature-extraction
| 2022-09-09T06:09:09Z |
---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-1k
---
# Vision Transformer (large-sized model) pre-trained with MSN
Vision Transformer (ViT) model pre-trained using the MSN method. It was introduced in the paper [Masked Siamese Networks for Label-Efficient Learning](https://arxiv.org/abs/2204.07141) by Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas and first released in [this repository](https://github.com/facebookresearch/msn).
Disclaimer: The team releasing MSN did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches.
MSN presents a joint-embedding architecture to match the prototypes of masked patches with that of the unmasked patches. With this setup, their method yields excellent performance in the low-shot and extreme low-shot regimes.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
## Intended uses & limitations
You can use the raw model for downstream tasks like image classification. See the [model hub](https://huggingface.co/models?filter=vit_msn) to look for different versions of MSN pre-trained models that interest you. The model is particularly beneficial when you have a few labeled samples in your training set.
### How to use
Here is how to use this backbone encoder:
```python
from transformers import AutoFeatureExtractor, ViTMSNModel
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-msn-large")
model = ViTMSNModel.from_pretrained("facebook/vit-msn-large")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
For fine-tuning on image classification use the `ViTMSNForImageClassification` class:
```python
from transformers import AutoFeatureExtractor, ViTMSNForImageClassification
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/vit-msn-large")
model = ViTMSNForImageClassification.from_pretrained("facebook/vit-msn-large")
...
```
### Citation
```bibtex
@article{assran2022masked,
title={Masked Siamese Networks for Label-Efficient Learning},
author={Assran, Mahmoud, and Caron, Mathilde, and Misra, Ishan, and Bojanowski, Piotr, and Bordes, Florian and Vincent, Pascal, and Joulin, Armand, and Rabbat, Michael, and Ballas, Nicolas},
journal={arXiv preprint arXiv:2204.07141},
year={2022}
}
```
|
Bistolero/italian2ep
|
Bistolero
| 2022-09-30T13:13:34Z | 54 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-30T12:51:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: italian2ep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# italian2ep
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 30
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seed-4
|
anas-awadalla
| 2022-09-30T12:37:18Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-30T12:32:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-512-finetuned-squad-seed-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seed-2
|
anas-awadalla
| 2022-09-30T12:30:21Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-30T12:26:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-512-finetuned-squad-seed-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
anas-awadalla/bart-base-few-shot-k-512-finetuned-squad-seed-0
|
anas-awadalla
| 2022-09-30T12:24:07Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-09-30T12:19:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bart-base-few-shot-k-512-finetuned-squad-seed-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.