| modelId (string, length 5 to 139) | author (string, length 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-08 06:28:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 546 classes) | tags (list, length 1 to 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-08 06:27:40) | card (string, length 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
danielsc/bert_test
|
danielsc
| 2022-11-16T18:24:37Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"exbert",
"autotrain-compatible",
"automatic-speech-recognition",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-14T20:59:26Z |
---
language: en
tags:
- exbert
- autotrain-compatible
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
pipeline_tag: automatic-speech-recognition
---
# BERT base model (cased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference between
english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] Hello I'm a fashion model. [SEP]",
'score': 0.09019174426794052,
'token': 4633,
'token_str': 'fashion'},
{'sequence': "[CLS] Hello I'm a new model. [SEP]",
'score': 0.06349995732307434,
'token': 1207,
'token_str': 'new'},
{'sequence': "[CLS] Hello I'm a male model. [SEP]",
'score': 0.06228214129805565,
'token': 2581,
'token_str': 'male'},
{'sequence': "[CLS] Hello I'm a professional model. [SEP]",
'score': 0.0441727414727211,
'token': 1848,
'token_str': 'professional'},
{'sequence': "[CLS] Hello I'm a super model. [SEP]",
'score': 0.03326151892542839,
'token': 7688,
'token_str': 'super'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = BertModel.from_pretrained("bert-base-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
model = TFBertModel.from_pretrained("bert-base-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-cased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] The man worked as a lawyer. [SEP]',
'score': 0.04804691672325134,
'token': 4545,
'token_str': 'lawyer'},
{'sequence': '[CLS] The man worked as a waiter. [SEP]',
'score': 0.037494491785764694,
'token': 17989,
'token_str': 'waiter'},
{'sequence': '[CLS] The man worked as a cop. [SEP]',
'score': 0.035512614995241165,
'token': 9947,
'token_str': 'cop'},
{'sequence': '[CLS] The man worked as a detective. [SEP]',
'score': 0.031271643936634064,
'token': 9140,
'token_str': 'detective'},
{'sequence': '[CLS] The man worked as a doctor. [SEP]',
'score': 0.027423162013292313,
'token': 3995,
'token_str': 'doctor'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] The woman worked as a nurse. [SEP]',
'score': 0.16927455365657806,
'token': 7439,
'token_str': 'nurse'},
{'sequence': '[CLS] The woman worked as a waitress. [SEP]',
'score': 0.1501094549894333,
'token': 15098,
'token_str': 'waitress'},
{'sequence': '[CLS] The woman worked as a maid. [SEP]',
'score': 0.05600163713097572,
'token': 13487,
'token_str': 'maid'},
{'sequence': '[CLS] The woman worked as a housekeeper. [SEP]',
'score': 0.04838843643665314,
'token': 26458,
'token_str': 'housekeeper'},
{'sequence': '[CLS] The woman worked as a cook. [SEP]',
'score': 0.029980547726154327,
'token': 9834,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a short illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token, different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
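A minimal sketch of this 80/10/10 rule (illustrative only; the original pretraining code lives in the Google BERT repository linked above):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the 15% / 80-10-10 masking rule described above to a list of token ids."""
    labels = [-100] * len(token_ids)  # -100 marks positions that are not predicted
    masked = list(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:       # 15% of tokens are selected
            labels[i] = tok                   # the model must predict the original token
            r = random.random()
            if r < 0.8:                       # 80%: replace with [MASK]
                masked[i] = mask_id
            elif r < 0.9:                     # 10%: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10%: leave the token unchanged
    return masked, labels
```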
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
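As a rough PyTorch equivalent of the optimizer and schedule described above (a sketch, not the original TPU training code):
```python
import torch
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-cased")

# Adam with weight decay 0.01, lr 1e-4, betas (0.9, 0.999), 10k warmup steps, linear decay over 1M steps
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```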
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-cased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
vwxyzjn/CartPole-v1-dqn-seed1
|
vwxyzjn
| 2022-11-16T18:12:42Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-18T14:30:08Z |
---
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 65.80 +/- 13.11
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **CartPole-v1**
This is a trained model of a DQN agent playing CartPole-v1.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn.py).
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/vwxyzjn/CartPole-v1-dqn-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/vwxyzjn/CartPole-v1-dqn-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/vwxyzjn/CartPole-v1-dqn-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn.py --cuda False --save-model --upload-model --total-timesteps 500
```
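The saved weights can also be fetched directly from this repository; the snippet below is a sketch, and the checkpoint filename is an assumption (check the repository's file list):
```python
from huggingface_hub import hf_hub_download

# The filename "dqn.cleanrl_model" is an assumption; verify it against the repository's files.
model_path = hf_hub_download(repo_id="vwxyzjn/CartPole-v1-dqn-seed1", filename="dqn.cleanrl_model")
print(model_path)
```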
# Hyperparameters
```python
{'batch_size': 128,
'buffer_size': 10000,
'capture_video': False,
'cuda': False,
'end_e': 0.05,
'env_id': 'CartPole-v1',
'exp_name': 'dqn',
'exploration_fraction': 0.5,
'gamma': 0.99,
'hf_entity': '',
'learning_rate': 0.00025,
'learning_starts': 10000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 500,
'torch_deterministic': True,
'total_timesteps': 500,
'track': False,
'train_frequency': 10,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
Den4ikAI/rugpt3-QA-old
|
Den4ikAI
| 2022-11-16T18:04:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-14T11:32:59Z |
---
license: mit
widget:
- text: "Q: Что такое любовь?\nA:"
example_title: "test"
---
Attention!!! This repository will no longer be updated because the dataset is being reworked! The new model is available at [Den4ikAI/rugpt3-QA](https://huggingface.co/Den4ikAI/rugpt3-QA)
Checkpoints of the rugpt3-medium model trained on data from otvet.mail.ru were published here.
The old dataset is available [here](https://huggingface.co/datasets/Den4ikAI/mailru-QA-old)
Current checkpoint: 180k steps, 0 epochs
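A minimal usage sketch (an illustration based on the widget prompt format, not part of the original card):
```python
from transformers import pipeline

# Minimal sketch; the "Q: ...\nA:" format follows the widget example above.
generator = pipeline("text-generation", model="Den4ikAI/rugpt3-QA-old")
prompt = "Q: Что такое любовь?\nA:"
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```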
|
Ashenhard/Ashenhard-style
|
Ashenhard
| 2022-11-16T17:59:16Z | 0 | 3 | null |
[
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-11-16T10:04:06Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
I'm a digital artist learning to work with these new tools; this is my first style model.
I'm on Instagram: @ashenhard84 and Twitter: @ashenhard
This model was trained on 85 images for 8,500 steps at a learning rate of 1e-6 in Shivam Shrirao's Google Colab.
I think the potential of this model is to merge it with others.
The token is **Ashenhard style**
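A usage sketch with the diffusers library, assuming the weights are available in (or have been converted to) the diffusers format; otherwise load the checkpoint in your Stable Diffusion UI of choice and use the token in your prompt:
```python
# Sketch only: assumes diffusers-format weights; if the repository ships a .ckpt, convert it first.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("Ashenhard/Ashenhard-style", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("portrait of a warrior, Ashenhard style").images[0]
image.save("warrior.png")
```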
**Generated by the model without merge:**




**Generated by the model merged with (A) Anything V3 at 0.4 - (B) Ashenhard:**


**Testing Img2Img with the model+anything**

**Generated by the model merged with (A) Ashenhard at 0.4 - (B) F222:**


|
guumaster/skrichy-diffusion
|
guumaster
| 2022-11-16T15:59:57Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-16T15:44:15Z |
---
license: creativeml-openrail-m
---
|
projecte-aina/roberta-base-ca-cased-pos
|
projecte-aina
| 2022-11-16T15:22:48Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"catalan",
"part of speech tagging",
"pos",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:universal_dependencies",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "part of speech tagging"
- "pos"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "universal_dependencies"
metrics:
- f1
inference:
parameters:
aggregation_strategy: "first"
model-index:
- name: roberta-base-ca-cased-pos
results:
- task:
type: token-classification
dataset:
type: universal_dependencies
name: Ancora-ca-POS
metrics:
- name: F1
type: f1
value: 0.9893832385244624
widget:
- text: "Em dic Lluïsa i visc a Santa Maria del Camí."
- text: "L'Aina, la Berta i la Norma són molt amigues."
- text: "El Martí llegeix el Cavall Fort."
---
# Catalan BERTa (roberta-base-ca) finetuned for Part-of-speech-tagging (POS)
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-cased-pos** is a Part-of-speech-tagging (POS) model for the Catalan language fine-tuned from the roberta-base-ca model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Intended uses and limitations
The **roberta-base-ca-cased-pos** model can be used for Part-of-speech tagging (POS) of a text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
from pprint import pprint
nlp = pipeline("token-classification", model="projecte-aina/roberta-base-ca-cased-pos")
example = "Em dic Lluïsa i visc a Santa Maria del Camí."
pos_results = nlp(example)
pprint(pos_results)
```
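The card's inference configuration sets `aggregation_strategy: "first"`; the same option can be passed to the pipeline explicitly (a minimal sketch):
```python
from transformers import pipeline

# Mirrors the aggregation_strategy defined in the card's inference configuration.
nlp = pipeline(
    "token-classification",
    model="projecte-aina/roberta-base-ca-cased-pos",
    aggregation_strategy="first",
)
print(nlp("Em dic Lluïsa i visc a Santa Maria del Camí."))
```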
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the POS dataset in Catalan from the [Universal Dependencies Treebank](https://huggingface.co/datasets/universal_dependencies), which we refer to as _Ancora-ca-pos_, for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing F1 score.
## Evaluation results
We evaluated the _roberta-base-ca-cased-pos_ on the Ancora-ca-pos test set against standard multilingual and monolingual baselines:
| Model | AnCora-Ca-POS (F1) |
| ------------|:-------------|
| roberta-base-ca-cased-pos |**98.93** |
| mBERT | 98.82 |
| XLM-RoBERTa | 98.89 |
| WikiBERT-ca | 97.60 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
Watwat100/smol16
|
Watwat100
| 2022-11-16T15:22:10Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-16T15:21:35Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
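Alternatively, here is a plain HuggingFace Transformers sketch with mean pooling and normalization, matching the Pooling and Normalize modules listed under "Full Model Architecture" below (illustrative only):
```python
from transformers import AutoTokenizer, AutoModel
import torch
import torch.nn.functional as F

def mean_pooling(model_output, attention_mask):
    # Average the token embeddings, ignoring padding positions.
    token_embeddings = model_output[0]
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * mask, 1) / torch.clamp(mask.sum(1), min=1e-9)

tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded)
embeddings = F.normalize(mean_pooling(output, encoded['attention_mask']), p=2, dim=1)
print(embeddings)
```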
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 10 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 10,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
monakth/distillbert-base-uncased-fine-tuned-squadv2
|
monakth
| 2022-11-16T15:16:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-16T15:15:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distillbert-base-uncased-fine-tuned-squad-squadv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-uncased-fine-tuned-squad-squadv
This model is a fine-tuned version of [monakth/distillbert-base-uncased-fine-tuned-squad](https://huggingface.co/monakth/distillbert-base-uncased-fine-tuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an approximate sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
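A rough reconstruction of these settings as `TrainingArguments` (a sketch, not the original training script):
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above; not the original script.
training_args = TrainingArguments(
    output_dir="distillbert-base-uncased-fine-tuned-squad-squadv",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```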
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
projecte-aina/roberta-base-ca-cased-sts
|
projecte-aina
| 2022-11-16T15:10:28Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"semantic textual similarity",
"sts-ca",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:projecte-aina/sts-ca",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
pipeline_tag: text-classification
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "semantic textual similarity"
- "sts-ca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/sts-ca"
metrics:
- "combined_score"
model-index:
- name: roberta-base-ca-cased-sts
results:
- task:
type: text-classification
dataset:
type: projecte-aina/sts-ca
name: STS-ca
metrics:
- name: Pearson
type: Pearson
value: 0.797
---
# Catalan BERTa (roberta-base-ca) finetuned for Semantic Textual Similarity.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language fine-tuned from the roberta-base-ca model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Intended uses and limitations
**roberta-base-ca-cased-sts** model can be used to assess the similarity between two snippets of text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
To get the model's correct<sup>1</sup> prediction scores, with values between 0.0 and 5.0, use the following code:
```python
from transformers import pipeline, AutoTokenizer
from scipy.special import logit
model = 'projecte-aina/roberta-base-ca-cased-sts'
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
def prepare(sentence_pairs):
sentence_pairs_prep = []
for s1, s2 in sentence_pairs:
sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}")
return sentence_pairs_prep
sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."),
("M'agrades.", "T'estimo."),
("M'agrada el sol i la calor", "A la Garrotxa plou molt.")]
predictions = pipe(prepare(sentence_pairs), add_special_tokens=False)
# convert the scores back to the original 0-5 interval
for prediction in predictions:
prediction['score'] = logit(prediction['score'])
print(predictions)
```
Expected output:
```
[{'label': 'SIMILARITY', 'score': 2.118301674983813},
{'label': 'SIMILARITY', 'score': 2.1799755855125853},
{'label': 'SIMILARITY', 'score': 0.9511617858568939}]
```
<sup>1</sup> _**avoid using the widget** scores since they are normalized and do not reflect the original annotation values._
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the STS dataset in Catalan called [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set, and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing the average score between the Pearson and Spearman correlations.
## Evaluation results
We evaluated the _roberta-base-ca-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines:
| Model | STS-ca (Pearson score) |
| ------------|:-------------|
| roberta-base-ca-cased-sts | 79.73 |
| mBERT | 74.26 |
| XLM-RoBERTa | 61.61 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
projecte-aina/roberta-base-ca-cased-qa
|
projecte-aina
| 2022-11-16T15:05:08Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"catalan",
"qa",
"ca",
"dataset:xquad-ca",
"dataset:viquiquad",
"arxiv:1907.11692",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "qa"
datasets:
- "xquad-ca"
- "viquiquad"
metrics:
- "f1"
- "exact match"
widget:
- text: "Quan va començar el Super3?"
context: "El Super3 o Club Super3 és un univers infantil català creat a partir d'un programa emès per Televisió de Catalunya des del 1991. Està format per un canal de televisió, la revista Súpers!, la Festa dels Súpers i un club que té un milió i mig de socis."
- text: "Quants eren els germans Marx?"
context: "Els germans Marx van ser un grup de còmics dels Estats Units que originàriament estava compost per cinc germans (entre parèntesis els noms artístics): Leonard (Chico), Adolph (Harpo), Julius (Groucho), Milton (Gummo) i Herbert (Zeppo)."
- text: "On van ser els Jocs Olímpics de 1992?"
context: "Els Jocs Olímpics d'estiu de 1992, oficialment Jocs Olímpics de la XXV Olimpíada, es van celebrar a la ciutat de Barcelona entre els dies 25 de juliol i 9 d'agost de 1992. Hi participaren 9.356 atletes (6.652 homes i 2.704 dones) de 169 comitès nacionals, que competiren en 32 esports i 286 especialitats."
- text: "Qui va dissenyar la Sagrada Família?"
context: "El Temple Expiatori de la Sagrada Família, conegut habitualment com la Sagrada Família, és una basílica catòlica situada a la ciutat de Barcelona. És un dels exemples més coneguts del modernisme català i un edifici únic al món, que ha esdevingut tot un símbol de la ciutat. Obra inacabada de l'arquitecte català Antoni Gaudí, és al barri de la Sagrada Família, al districte de l'Eixample de la ciutat."
- text: "Quin és el tercer volcà més gran de la Terra?"
context: "El Teide (o Pic del Teide) és un estratovolcà i muntanya de Tenerife, Illes Canàries (28.27 N, 16.6 O). Amb una altitud de 3718 m sobre el nivell del mar i amb aproximadament uns 7000 m sobre el llit marí adjacent, és la muntanya més alta d'Espanya, la muntanya més alta de totes les illes atlàntiques i el tercer volcà més gran de la Terra."
---
# Catalan BERTa (roberta-base-ca) finetuned for Question Answering.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-cased-qa** is a Question Answering (QA) model for the Catalan language fine-tuned from the roberta-base-ca model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers.
## Intended uses and limitations
**roberta-base-ca-cased-qa** model can be used for extractive question answering. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
Here is how to use this model:
```python
from transformers import pipeline
nlp = pipeline("question-answering", model="projecte-aina/roberta-base-ca-cased-qa")
text = "Quan va començar el Super3?"
context = "El Super3 o Club Super3 és un univers infantil català creat a partir d'un programa emès per Televisió de Catalunya des del 1991. Està format per un canal de televisió, la revista Súpers!, la Festa dels Súpers i un club que té un milió i mig de socis."
qa_results = nlp(text, context)
print(qa_results)
```
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the QA dataset in Catalan called [CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa) for training and evaluation, and the [XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca) test set for evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing F1 score.
### Evaluation results
We evaluated the _roberta-base-ca-cased-qa_ on the CatalanQA and XQuAD-ca test sets against standard multilingual and monolingual baselines:
| Model | ViquiQuAD (F1/EM) | XQuAD-ca (F1/EM) |
| ------------|:-------------:| -----:|
| roberta-base-ca-cased-qa | **86.99/73.25** | **67.81/49.43** |
| mBERT | 86.97/72.22 | 67.15/46.51 |
| XLM-RoBERTa | 85.50/70.47 | 67.10/46.42 |
| WikiBERT-ca | 85.45/70.75 | 65.21/36.60 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
Watwat100/smol8
|
Watwat100
| 2022-11-16T14:38:11Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-16T14:37:37Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
projecte-aina/roberta-base-ca-v2-cased-sts
|
projecte-aina
| 2022-11-16T14:32:52Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"catalan",
"semantic textual similarity",
"sts-ca",
"CaText",
"Catalan Textual Corpus",
"ca",
"dataset:projecte-aina/sts-ca",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-30T07:55:48Z |
---
pipeline_tag: text-classification
language:
- ca
license: apache-2.0
tags:
- "catalan"
- "semantic textual similarity"
- "sts-ca"
- "CaText"
- "Catalan Textual Corpus"
datasets:
- "projecte-aina/sts-ca"
metrics:
- "combined_score"
model-index:
- name: roberta-base-ca-v2-cased-sts
results:
- task:
type: text-classification
dataset:
type: projecte-aina/sts-ca
name: STS-ca
metrics:
- name: Combined score
type: combined_score
value: 0.7907
---
# Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Semantic Textual Similarity.
## Table of Contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Evaluation](#evaluation)
- [Variable and metrics](#variable-and-metrics)
- [Evaluation results](#evaluation-results)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
The **roberta-base-ca-v2-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details).
## Intended uses and limitations
**roberta-base-ca-v2-cased-sts** model can be used to assess the similarity between two snippets of text. The model is limited by its training dataset and may not generalize well for all use cases.
## How to use
To get the model's correct<sup>1</sup> prediction scores, with values between 0.0 and 5.0, use the following code:
```python
from transformers import pipeline, AutoTokenizer
from scipy.special import logit
model = 'projecte-aina/roberta-base-ca-v2-cased-sts'
tokenizer = AutoTokenizer.from_pretrained(model)
pipe = pipeline('text-classification', model=model, tokenizer=tokenizer)
def prepare(sentence_pairs):
sentence_pairs_prep = []
for s1, s2 in sentence_pairs:
sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}")
return sentence_pairs_prep
sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."),
("M'agrades.", "T'estimo."),
("M'agrada el sol i la calor", "A la Garrotxa plou molt.")]
predictions = pipe(prepare(sentence_pairs), add_special_tokens=False)
# convert the scores back to the original 0-5 interval
for prediction in predictions:
prediction['score'] = logit(prediction['score'])
print(predictions)
```
Expected output:
```
[{'label': 'SIMILARITY', 'score': 2.118301674983813},
{'label': 'SIMILARITY', 'score': 2.1799755855125853},
{'label': 'SIMILARITY', 'score': 0.9511617858568939}]
```
<sup>1</sup> _**avoid using the widget** scores since they are normalized and do not reflect the original annotation values._
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
### Training data
We used the STS dataset in Catalan called [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) for training and evaluation.
### Training procedure
The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set, and then evaluated it on the test set.
## Evaluation
### Variable and metrics
This model was finetuned maximizing the average score between the Pearson and Spearman correlations.
## Evaluation results
We evaluated the _roberta-base-ca-v2-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines:
| Model | STS-ca (Combined score) |
| ------------|:-------------|
| roberta-base-ca-v2-cased-sts | 79.07 |
| roberta-base-ca-cased-sts | **80.19** |
| mBERT | 74.26 |
| XLM-RoBERTa | 61.61 |
For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to aina@bsc.es
### Copyright
Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Citation Information
If you use any of these resources (datasets or models) in your work, please cite our latest paper:
```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
author = "Armengol-Estap{\'e}, Jordi and
Carrino, Casimiro Pio and
Rodriguez-Penagos, Carlos and
de Gibert Bonet, Ona and
Armentano-Oller, Carme and
Gonzalez-Agirre, Aitor and
Melero, Maite and
Villegas, Marta",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.437",
doi = "10.18653/v1/2021.findings-acl.437",
pages = "4933--4946",
}
```
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
|
xfh/min-stable-diffusion-pt
|
xfh
| 2022-11-16T14:21:40Z | 0 | 2 | null |
[
"region:us"
] | null | 2022-11-16T11:27:27Z |
# This is the min-stable-diffusion weights file
## I hope you enjoy it. I hope you can discover the light!!!
#### Weight file notes
1) wd-1-3-penultimate-ucg-cont.pt is waifu-diffusion-v1-4 weight
2) mdjrny-v4.pt is midjourney-v4-diffusion weight
3) stable_diffusion_v1_4.pt is CompVis/stable-diffusion-v1-4
4) stable_diffusion_v1_5.pt is runwayml/stable-diffusion-v1-5
5) animev3.pt is https://huggingface.co/Linaqruf/personal_backup/tree/main/animev3ckpt
6) Anything-V3.0.pt is https://huggingface.co/Linaqruf/anything-v3.0
#### Installation and usage instructions are on GitHub:
https://github.com/scale100xu/min-stable-diffusion
|
100click/my-friends
|
100click
| 2022-11-16T13:50:37Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-11-16T07:47:51Z |
---
license: openrail
---
# Model for image generation
## Model description
This model is based on the `stable-diffusion-v1-4` model. It has its own prompt vocabulary (check "How to use").
## Intended uses & limitations
The model can be used to generate images of my friends. Why would you need it? You're weird...
## How to use
Add text in prompt:
- 'naumanya guy' - adds Yuriy Nauman
|
microsoft/xdoc-base-funsd
|
microsoft
| 2022-11-16T13:44:31Z | 112 | 4 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"arxiv:2210.02849",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-14T04:15:16Z |
---
license: mit
---
# XDoc
## Introduction
XDoc is a unified pre-trained model that deals with different document formats in a single model. With only 36.7% of the parameters, XDoc achieves comparable or better performance on downstream tasks, which is cost-effective for real-world deployment.
[XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849)
Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, [EMNLP 2022](#)
## Citation
If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
journal={arXiv preprint arXiv:2210.02849},
year={2022}
}
```
|
microsoft/xdoc-base-squad1.1
|
microsoft
| 2022-11-16T13:44:18Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"arxiv:2210.02849",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-14T04:12:23Z |
---
license: mit
---
# XDoc
## Introduction
XDoc is a unified pre-trained model that deals with different document formats in a single model. With only 36.7% of the parameters, XDoc achieves comparable or better performance on downstream tasks, which is cost-effective for real-world deployment.
[XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849)
Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, [EMNLP 2022](#)
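A minimal usage sketch (an assumption that the checkpoint works with the standard question-answering pipeline; see the official repository for the intended usage):
```python
from transformers import pipeline

# Assumption: the checkpoint loads with the standard question-answering pipeline.
qa = pipeline("question-answering", model="microsoft/xdoc-base-squad1.1")
result = qa(
    question="What does XDoc unify?",
    context="XDoc is a unified pre-trained model that deals with different document formats in a single model.",
)
print(result)
```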
## Citation
If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
journal={arXiv preprint arXiv:2210.02849},
year={2022}
}
```
|
microsoft/xdoc-base
|
microsoft
| 2022-11-16T13:43:32Z | 64 | 6 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"arxiv:2210.02849",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-11-14T04:09:13Z |
---
license: mit
---
# XDoc
## Introduction
XDoc is a unified pre-trained model that deals with different document formats in a single model. With only 36.7% of the parameters, XDoc achieves comparable or better performance on downstream tasks, which is cost-effective for real-world deployment.
[XDoc: Unified Pre-training for Cross-Format Document Understanding](https://arxiv.org/abs/2210.02849)
Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei, [EMNLP 2022](#)
## Citation
If you find XDoc helpful, please cite us:
```
@article{chen2022xdoc,
title={XDoc: Unified Pre-training for Cross-Format Document Understanding},
author={Chen, Jingye and Lv, Tengchao and Cui, Lei and Zhang, Cha and Wei, Furu},
journal={arXiv preprint arXiv:2210.02849},
year={2022}
}
```
|
Uesugi/distilbert-base-uncased-finetuned-emotion
|
Uesugi
| 2022-11-16T13:38:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-16T12:59:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2313
- Accuracy: 0.92
- F1: 0.9200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8717 | 1.0 | 250 | 0.3385 | 0.9015 | 0.8976 |
| 0.2633 | 2.0 | 500 | 0.2313 | 0.92 | 0.9200 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
EnglishVoice/t5-base-us-to-uk-english
|
EnglishVoice
| 2022-11-16T13:32:52Z | 361 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"paraphrase-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-16T13:30:05Z |
---
language:
- en
tags:
- text2text-generation
- paraphrase-generation
license: apache-2.0
widget:
- text: "US to UK: My favorite color is yellow."
---
### About the model
The model has been trained on a dataset containing [249525 sentences with US English spelling](https://www.englishvoice.ai/p/us-to-uk/ "249525 sentences with US English spelling"), along with their UK English equivalent.
The purpose of the model is to rewrite sentences from US English to UK English. It is capable not only of changing the spelling of words (such as "color" to "colour") but also of adapting the vocabulary appropriately (for example, "subway" to "underground", "lawyer" to "solicitor", and so on).
### Generation examples
| Input | Output |
| :------------ | :------------ |
| My favorite color is yellow. | My favourite colour is yellow. |
| I saw a guy in yellow sneakers at the subway station. | I saw a bloke in yellow trainers at the underground station. |
| You could have gotten hurt! | You could have got hurt! |
### The dataset
The dataset was developed by English Voice AI Labs. You can download it from our website:
[https://www.EnglishVoice.ai/](https://www.EnglishVoice.ai/ "https://www.EnglishVoice.ai/")
### Sample code
Sample Python code:
```python
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained("EnglishVoice/t5-base-us-to-uk-english")
tokenizer = T5Tokenizer.from_pretrained("EnglishVoice/t5-base-us-to-uk-english")
model = model.to(device)
input = "My favorite color is yellow."
text = "US to UK: " + input
encoding = tokenizer.encode_plus(text, return_tensors = "pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)
beam_outputs = model.generate(
input_ids = input_ids,
attention_mask = attention_masks,
early_stopping = True,
)
result = tokenizer.decode(beam_outputs[0], skip_special_tokens=True)
print(result)
```
Output:
```My favourite colour is yellow.```
|
EnglishVoice/t5-base-uk-to-us-english
|
EnglishVoice
| 2022-11-16T13:28:30Z | 485 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"paraphrase-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T18:34:16Z |
---
language:
- en
tags:
- text2text-generation
- paraphrase-generation
license: apache-2.0
widget:
- text: "UK to US: My favourite colour is yellow."
---
### About the model
The model has been trained on a dataset containing [264519 sentences with UK English spelling](https://www.englishvoice.ai/p/uk-to-us/ "264519 sentences with UK English spelling"), along with their US English equivalent.
The purpose of the model is to rewrite sentences from UK English to US English. It not only changes the spelling of words (such as "colour" to "color") but also adapts the vocabulary appropriately (for example, "underground" to "subway", "solicitor" to "lawyer" and so on).
### Generation examples
| Input | Output |
| :------------ | :------------ |
| My favourite colour is yellow. | My favorite color is yellow. |
| I saw a bloke in yellow trainers at the underground station. | I saw a guy in yellow sneakers at the subway station. |
| You could have got hurt! | You could have gotten hurt! |
### The dataset
The dataset was developed by English Voice AI Labs. You can download it from our website:
[https://www.EnglishVoice.ai/](https://www.EnglishVoice.ai/ "https://www.EnglishVoice.ai/")
### Sample code
Sample Python code:
```python
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained("EnglishVoice/t5-base-uk-to-us-english")
tokenizer = T5Tokenizer.from_pretrained("EnglishVoice/t5-base-uk-to-us-english")
model = model.to(device)
input = "My favourite colour is yellow."
text = "UK to US: " + input
encoding = tokenizer.encode_plus(text, return_tensors = "pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)
beam_outputs = model.generate(
input_ids = input_ids,
attention_mask = attention_masks,
early_stopping = True,
)
result = tokenizer.decode(beam_outputs[0], skip_special_tokens=True)
print(result)
```
Output:
```My favorite color is yellow.```
|
upsalite/xlm-roberta-base-finetuned-emotion-37-labels
|
upsalite
| 2022-11-16T13:16:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T15:06:54Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-emotion-37-labels
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-emotion-37-labels
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1765
- Accuracy: 0.7185
- F1: 0.7178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 2.4256 | 1.0 | 433 | 1.7594 | 0.4384 | 0.4079 |
| 1.5536 | 2.0 | 866 | 1.3105 | 0.5784 | 0.5631 |
| 1.1753 | 3.0 | 1299 | 1.1767 | 0.6163 | 0.6057 |
| 0.9378 | 4.0 | 1732 | 1.0613 | 0.6565 | 0.6542 |
| 0.7606 | 5.0 | 2165 | 1.0284 | 0.6808 | 0.6776 |
| 0.6167 | 6.0 | 2598 | 1.0128 | 0.6892 | 0.6888 |
| 0.5009 | 7.0 | 3031 | 1.0250 | 0.6973 | 0.6946 |
| 0.4083 | 8.0 | 3464 | 1.0506 | 0.7014 | 0.6996 |
| 0.328 | 9.0 | 3897 | 1.0658 | 0.7075 | 0.7079 |
| 0.2704 | 10.0 | 4330 | 1.0874 | 0.7106 | 0.7094 |
| 0.2203 | 11.0 | 4763 | 1.1587 | 0.7031 | 0.7010 |
| 0.1813 | 12.0 | 5196 | 1.1559 | 0.7141 | 0.7130 |
| 0.1552 | 13.0 | 5629 | 1.1483 | 0.7173 | 0.7164 |
| 0.1325 | 14.0 | 6062 | 1.1697 | 0.7173 | 0.7170 |
| 0.1239 | 15.0 | 6495 | 1.1765 | 0.7185 | 0.7178 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.12.1
|
Froddan/frost
|
Froddan
| 2022-11-16T12:56:01Z | 0 | 3 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-16T07:27:58Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on photographs of frozen nature
### Usage
Use the model by adding the keyword "frostography" to your prompt. The model was trained with the "nature" class name, which can also be added to the prompt.
## Samples
I hope it gives you an idea of what kind of styles can be created with this model.
<img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_nature_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_nature_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_car_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_fish_1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_fish_2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/frostography_moon.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/tmp3vde80fz.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/tmpffxdfi38.png" width="256px"/>
<img src="https://huggingface.co/Froddan/frost/resolve/main/tmpmiz28zo5.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
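As a rough, hedged sketch (it assumes the repository ships weights in the Diffusers layout, which this card does not state), generation with the "frostography" keyword could look like this:
```python
import torch
from diffusers import StableDiffusionPipeline
# Assumption: diffusers-format weights are available under this repo id.
pipe = StableDiffusionPipeline.from_pretrained("Froddan/frost", torch_dtype=torch.float16).to("cuda")
prompt = "frostography nature, a frozen forest at dawn, intricate ice crystals"
image = pipe(prompt).images[0]
image.save("frostography_sample.png")
```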
|
ZTamas/hubert-qa-milqa
|
ZTamas
| 2022-11-16T12:46:50Z | 212 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"hu",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-16T11:57:45Z |
---
language: hu
thumbnail:
tags:
- question-answering
- bert
widget:
- text: "Mi Európa második leghosszabb folyója?"
context: "A Duna Európa leghosszabb folyama az oroszországi Volga után. Németországban, a Fekete-erdőben ered két kis patak, a Breg és a Brigach összefolyásával Donaueschingennél, és innen délkeleti irányban 2850 kilométert tesz meg a Fekete-tengerig. Magyarország egész területe e folyam vízgyűjtőjén terül el, itteni főágának hossza 417 km, ezért az ország vízrajzának meghatározó alkotóeleme.
A folyó kialakulása a pliocén korban kezdődött el. A pliocén végén jutott el a Duna a Kisalföldig, ekkor a mai nyugat–kelet irány helyett észak–dél irányban folyt itt. Csak a pleisztocén korban alakult ki a kisalföldi szakasza. A folyó legfiatalabb része a Dobrudzsa nyugati oldalán található dél–észak irányú folyása, amely pusztán a pleisztocén kor végén jött létre.
Napjainkban fontos nemzetközi hajóút. A németországi Rajna–Majna–Duna-csatorna 1992-es megépítése óta részét képezi annak a 3500 km-es transzeurópai vízi útnak, amely az Északi-tenger melletti Rotterdamtól a Fekete-tenger melletti Sulináig ér. A Dunán szállított áruk össztömege 1987-ben elérte a 100 millió tonnát."
- text: "Hol vannak a legnagyobb mocsarak a Duna mentén?"
context: "Lassú folyású szakaszain épít hordalékának lerakásával; a lerakott hordalékhalmot hordalékkúpnak nevezik. A Kisalföld és a Margit-sziget a Duna hordalékkúpja. illetve a Duna-delta (elsősorban a Kilia-ág) területén a turzások. A hordalék a felső szakaszon még igen nagy méreteket, lejjebb már csak porszemnyi nagyságot vehet fel. Ugyanis, amikor lelassul, akkor a folyó először a nagyobb, majd az egyre kisebb darabokat hagyja el (kövek, kavicsok, homok, finom por). Ha a vízszint hirtelen apadásnak indul, a Dunán jellemzően ideiglenes zátonyok alakulnak ki. Ha ismét megnő a vízmennyiség, akkor a folyó újra tovább tudja szállítani hordalékát, a zátonyok eltűnnek. Jellemzően zátonyos rész a Duna Rajka és Gönyű közötti szakasza, ahol a Bős–nagymarosi vízlépcső megépítésével a hasonló képződmények még nagyobb számban elszaporodtak. A Duna különleges képződményei az al-dunai sellők, amelyek a mederfenék kisebb-nagyobb kitüremkedéseit jelentik. Egyre kisebb számban, de még jellemzőek a Duna mentén a mocsarak, amelyek lerakott hordalékkúpokon alakultak ki. Ezekből a legnagyobbak Bajorországban, a Hanság és a Duna-delta területén találhatók."
- text: "Mióta jár Dunaharasztin hév?"
context: "Az egyesítéskor felmerült a Budapest környéki településekre helyiérdekű vasút (HÉV) építése is, azonban ez csak 1882-ben vált véglegessé: a BKVT április 4-ei ülésén a Közvágóhíd–Soroksár HÉV-vonal létrehozásáról határozott. Még javában folyt a vonal építése, de a BKVT már előmunkálati engedélyt kapott a dunaharaszti meghosszabbításra is. Az 1880-as évek végére a főváros négy HÉV-vonallal büszkélkedhetett: először a Budapest-Szentlőrinci Helyi Érdekű Vasút Rt. (BLVV) üzemeletetésében álló, Ferencvárost és a Budapest-Szentlőrinci Tégla- és Terracottagyárat összekötő vonalat nyitották meg 1887. április 12-én (későbbi 50-es villamos), amit a BKVT Közvágóhíd–Soroksár vonala követett augusztus 7-én, majd ennek a Dunaharasztiig érő szakasza november 24-én. 1888. július 20-án üzembe helyezte a BKVT a Kerepesi út–Cinkota, majd augusztus 17-én a Filatorigát–Szentendre is. A HÉV-ágazat 1889. december 29-én függetlenedett el, kezelője a Budapesti Helyi Érdekű Vasutak Részvénytársaság (BHÉV) lett."
---
This model is a fine-tuned version of [mcsabai/huBert-fine-tuned-hungarian-squadv1](https://huggingface.co/mcsabai/huBert-fine-tuned-hungarian-squadv1) on the milqa dataset.
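For reference, a minimal inference sketch with the 🤗 Transformers question-answering pipeline (question and a shortened context taken from the widget examples above) might look like this:
```python
from transformers import pipeline
qa = pipeline("question-answering", model="ZTamas/hubert-qa-milqa")
result = qa(
    question="Mi Európa második leghosszabb folyója?",
    context="A Duna Európa leghosszabb folyama az oroszországi Volga után.",
)
# The answer is a span extracted from the context, plus a confidence score.
print(result["answer"], result["score"])
```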
|
nategro/contradiction-mini-lds
|
nategro
| 2022-11-16T12:43:18Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-13T07:59:14Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# contradiction-mini-lds
A model for the identification of contradiction sentences in patents using all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nategro/contradiction-mini-lds')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1128 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1128,
"warmup_steps": 113,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
The following pre-trained model was used: [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
|
nategro/parameter-mini-lds
|
nategro
| 2022-11-16T12:40:55Z | 28 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-13T07:59:39Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# parameter-mini-lds
A model for the identification of parameter sentences in patents using all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nategro/parameter-mini-lds')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1128 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1128,
"warmup_steps": 113,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
The following pre-trained model was used: [`sentence-transformers/all-MiniLM-L6-v2`](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
|
nategro/nps-mpnet-lds
|
nategro
| 2022-11-16T12:39:08Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-13T13:04:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# nps-mpnet-lds
A model for the identification of problem and solution sentences in patents using paraphrase-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nategro/nps-mpnet-lds')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nategro/nps-mpnet-lds')
model = AutoModel.from_pretrained('nategro/nps-mpnet-lds')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 630 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 630,
"warmup_steps": 63,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
The following pre-trained model was used: [`sentence-transformers/paraphrase-mpnet-base-v2`](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
|
nategro/nps-mpnet
|
nategro
| 2022-11-16T12:38:02Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-13T13:05:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# nps-mpnet
A model for the identification of problem and solution sentences in patents using sentence-transformers/paraphrase-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nategro/nps-mpnet')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nategro/nps-mpnet')
model = AutoModel.from_pretrained('nategro/nps-mpnet')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 276 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 276,
"warmup_steps": 28,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
The following pre-trained model was used: [`sentence-transformers/paraphrase-mpnet-base-v2`](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
|
saketh-chervu/distilroberta-base-finetuned-distilroberta
|
saketh-chervu
| 2022-11-16T12:35:56Z | 79 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-16T11:42:40Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: saketh-chervu/distilroberta-base-finetuned-distilroberta
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# saketh-chervu/distilroberta-base-finetuned-distilroberta
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.1462
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
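In the meantime, a minimal masked-language-modelling sketch is shown below; it assumes the TensorFlow weights in this repo load through the standard fill-mask pipeline with framework="tf", and the example sentence is illustrative only.
```python
from transformers import pipeline
# framework="tf" because this checkpoint was saved with Keras/TensorFlow weights.
fill = pipeline(
    "fill-mask",
    model="saketh-chervu/distilroberta-base-finetuned-distilroberta",
    framework="tf",
)
# RoBERTa-style models use <mask> as the mask token.
print(fill("The goal of machine learning is to <mask> patterns from data."))
```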
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.1462 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
nategro/contradiction-psb
|
nategro
| 2022-11-16T12:27:52Z | 379 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-13T08:21:42Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# contradiction-psb
A model for the identification of contradiction sentences in patents using PatentSBERTa
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('nategro/contradiction-psb')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('nategro/contradiction-psb')
model = AutoModel.from_pretrained('nategro/contradiction-psb')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 496 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 496,
"warmup_steps": 50,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
The following pre-trained model was used: [`AI-Growth-Lab/PatentSBERTa`](https://huggingface.co/AI-Growth-Lab/PatentSBERTa)
|
Sebabrata/dof-aadhaar-1
|
Sebabrata
| 2022-11-16T11:55:23Z | 46 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-11-16T08:53:39Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dof-aadhaar-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-aadhaar-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Froddan/gankpansuay
|
Froddan
| 2022-11-16T11:19:47Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-16T07:56:58Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Gank Pansuay](https://www.instagram.com/gank_pansuay/)
### Usage
Use the model by adding the keyword "gankpansuay" to your prompt. The model was trained with the "woman" class name, which can also be added to the prompt.
## Samples
The top 4 samples are "pure" while others are mixed with other artists and modifiers. I hope it still gives you an idea of what kind of
styles can be created with this model.
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/1400_index3.png" width="256px"/>
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/1400_index4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/1400_index5.png" width="256px"/>
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/1400_index7.png" width="256px"/>
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/1400_index8.png" width="256px"/>
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/index2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/gankpansuay/resolve/main/train_1400_3.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
|
Wheatley961/Raw_3_no_2_Test_2_new.model
|
Wheatley961
| 2022-11-16T11:19:29Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-16T11:19:04Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 108 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 108,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
fanzru/t5-small-finetuned-xsum
|
fanzru
| 2022-11-16T11:05:47Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-24T07:58:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2047
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4786
- Rouge1: 28.2047
- Rouge2: 7.7109
- Rougel: 22.1559
- Rougelsum: 22.1595
- Gen Len: 18.8257
## Model description
More information needed
## Intended uses & limitations
More information needed
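Pending proper documentation, a minimal summarization sketch (using the standard 🤗 Transformers pipeline; the example text is illustrative only) could look like this:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="fanzru/t5-small-finetuned-xsum")
article = (
    "The Eiffel Tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)
# XSum-style models are trained to produce a short, single-sentence summary.
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```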
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7156 | 1.0 | 12753 | 2.4786 | 28.2047 | 7.7109 | 22.1559 | 22.1595 | 18.8257 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0a0+b6df043
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Wheatley961/Raw_3_no_1_Test_2_new.model
|
Wheatley961
| 2022-11-16T10:43:52Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-16T10:43:28Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 80 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 80,
"warmup_steps": 8,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Froddan/saidoudicko
|
Froddan
| 2022-11-16T10:38:04Z | 0 | 0 | null |
[
"stable-diffusion",
"text-to-image",
"en",
"license:cc0-1.0",
"region:us"
] |
text-to-image
| 2022-11-16T08:10:52Z |
---
license: cc0-1.0
inference: false
language:
- en
tags:
- stable-diffusion
- text-to-image
---
# Stable Diffusion fine tuned on art by [Saidou Dicko](https://www.instagram.com/saidou_dicko/)
### Usage
Use the model by adding the keyword "saidoudicko" to your prompt. The model was trained with the "person" class name, which can also be added to the prompt.
## Samples
The top 4 samples are "pure" while others are mixed with other artists and modifiers. I hope it still gives you an idea of what kind of
styles can be created with this model.
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/index.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/index4.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/index8.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/index9.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/bak_nevado.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/greg_mucha2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/mckernan_guay1.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/mckernan_guay2.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/muholi_lafforgue6.png" width="256px"/>
<img src="https://huggingface.co/Froddan/saidoudicko/resolve/main/mumford.png" width="256px"/>
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
|
Glad-pkpk/bert-finetuned-ner
|
Glad-pkpk
| 2022-11-16T10:07:32Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-16T05:14:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9333003136866436
- name: Recall
type: recall
value: 0.9513631773813531
- name: F1
type: f1
value: 0.9422451870989249
- name: Accuracy
type: accuracy
value: 0.9865632542532525
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9333
- Recall: 0.9514
- F1: 0.9422
- Accuracy: 0.9866
## Model description
More information needed
## Intended uses & limitations
More information needed
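Until more is documented, a minimal inference sketch (assuming the standard 🤗 Transformers token-classification pipeline; the example sentence is illustrative only) is:
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="Glad-pkpk/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Hugging Face is a company based in New York City."))
```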
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0894 | 1.0 | 1756 | 0.0722 | 0.9170 | 0.9335 | 0.9252 | 0.9820 |
| 0.035 | 2.0 | 3512 | 0.0637 | 0.9299 | 0.9482 | 0.9389 | 0.9860 |
| 0.0183 | 3.0 | 5268 | 0.0620 | 0.9333 | 0.9514 | 0.9422 | 0.9866 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
gcrssr/ddpm-butterflies-128
|
gcrssr
| 2022-11-16T09:43:01Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-16T08:55:02Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
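Until the snippet above is filled in, one plausible sketch (assuming the repository follows the standard `DDPMPipeline` layout, as the tags suggest) is:
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("gcrssr/ddpm-butterflies-128").to("cuda")
# Unconditional sampling: the pipeline returns PIL images.
image = pipeline(batch_size=1).images[0]
image.save("butterfly.png")
```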
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/gcrssr/ddpm-butterflies-128/tensorboard?#scalars)
|
ivanchangoluisa/q-Taxi-v3
|
ivanchangoluisa
| 2022-11-16T09:31:17Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-16T09:31:11Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ivanchangoluisa/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
sd-concepts-library/zizigooloo
|
sd-concepts-library
| 2022-11-16T09:20:33Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-11-16T09:20:28Z |
---
license: mit
---
### Zizigooloo on Stable Diffusion
This is the `<zizigooloo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:






|
Xiegg/scherenschnitt_papercut
|
Xiegg
| 2022-11-16T08:43:23Z | 0 | 0 | null |
[
"license:cc",
"region:us"
] | null | 2022-11-16T08:08:19Z |
---
license: cc
---
This model, trained on SD 1.5, provides different styles of layered paper art.
Trigger word: scherenschnitt papercut
Prompt example:
layering paper art, 75mm photography of a scherenschnitt papercut, the christmas crib scene in the stable with ox mule and adoration of kings, artist's work, detailed, (white) paper, (navyblue) paper, (color) paper, christmas, backlight effect, harmonic shapes, winter landscape, cute, romantic xmas, in focus, 8k, a bit underexposed, 3d effect, unreal engine, blender render, ((symmetrie)), abstraction, HD, family christmas in switzerland, in layering paper art, paper cut, paper folding
Negative prompt: text, writing, logo, signature, tree
Settings:
- Steps: 50
- Sampler: DPM fast
- CFG scale: 14
- Seed: 2147632306
- Size: 704x512
- Model hash: 78e2aaa9
- Variation seed: 362561481
- Variation seed strength: 0.4
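As a rough translation of these settings into code (a hedged sketch only: it assumes the repository provides diffusers-format weights, and approximates the "DPM fast" sampler with a DPM-Solver scheduler):
```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler
# Assumption: diffusers-format weights exist under this repo id; adjust if only a .ckpt is shipped.
pipe = StableDiffusionPipeline.from_pretrained("Xiegg/scherenschnitt_papercut", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)  # stand-in for "DPM fast"
prompt = "layering paper art, 75mm photography of a scherenschnitt papercut, christmas crib scene, winter landscape"
negative_prompt = "text, writing, logo, signature, tree"
image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=50,
    guidance_scale=14,
    width=704,
    height=512,
    generator=torch.Generator("cuda").manual_seed(2147632306),
).images[0]
image.save("scherenschnitt_papercut.png")
```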
|
ivanchangoluisa/q-FrozenLake-v1-4x4-noSlippery
|
ivanchangoluisa
| 2022-11-16T08:42:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-16T08:42:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="ivanchangoluisa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Sebabrata/dof-pan-1
|
Sebabrata
| 2022-11-16T07:36:18Z | 48 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-11-16T05:44:10Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dof-pan-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-pan-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.2
|
haoanh98/phoBert-514
|
haoanh98
| 2022-11-16T06:56:08Z | 60 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"feature-extraction",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-11-16T06:55:45Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: phoBert-514
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# phoBert-514
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
EtoEto/Stable_Diffusion_B_W_Winter_model
|
EtoEto
| 2022-11-16T06:42:13Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2022-11-16T06:19:21Z |
---
license: openrail
---
This is a fine-tuned Stable Diffusion model trained on black-and-white films by Danil Matvienko.
Use **bwWinter** in your prompts.

.png)
|
GItaf/BERT-FINETUNE-MBTI-LM-BERT-FINETUNE-MBTI-LM-JointBERT-Warmup-from-LM
|
GItaf
| 2022-11-16T06:16:03Z | 56 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-12T05:47:43Z |
---
tags:
- generated_from_trainer
model-index:
- name: BERT-FINETUNE-MBTI-LM-BERT-FINETUNE-MBTI-LM-JointBERT-Warmup-from-LM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-FINETUNE-MBTI-LM-BERT-FINETUNE-MBTI-LM-JointBERT-Warmup-from-LM
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7966
- Cls loss: 1.4255
- Lm loss: 4.4398
- Cls Accuracy: 0.6380
- Cls F1: 0.6319
- Cls Precision: 0.6416
- Cls Recall: 0.6380
- Perplexity: 84.76
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 5.3087 | 1.0 | 3470 | 4.9005 | 1.4109 | 4.5474 | 0.6075 | 0.5981 | 0.6132 | 0.6075 | 94.39 |
| 4.8274 | 2.0 | 6940 | 4.7987 | 1.3448 | 4.4621 | 0.6242 | 0.6193 | 0.6381 | 0.6242 | 86.67 |
| 4.6472 | 3.0 | 10410 | 4.7966 | 1.4255 | 4.4398 | 0.6380 | 0.6319 | 0.6416 | 0.6380 | 84.76 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mike008/ddpm-butterflies-128
|
mike008
| 2022-11-16T06:11:48Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-11-16T04:55:43Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
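A minimal sampling sketch, assuming the standard 🤗 Diffusers `DDPMPipeline` interface indicated by this repo's pipeline tag:
```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("mike008/ddpm-butterflies-128")

# Generate one unconditional 128x128 butterfly image
image = pipeline().images[0]
image.save("butterfly.png")
```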
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/mike008/ddpm-butterflies-128/tensorboard?#scalars)
|
teacookies/autotrain-16112022-cert-2114268313
|
teacookies
| 2022-11-16T05:47:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-16112022-cert",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-16T05:38:00Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-16112022-cert
co2_eq_emissions:
emissions: 0.08699410121541305
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2114268313
- CO2 Emissions (in grams): 0.0870
## Validation Metrics
- Loss: 0.003
- Accuracy: 0.999
- Precision: 0.987
- Recall: 0.986
- F1: 0.986
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-16112022-cert-2114268313
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-16112022-cert-2114268313", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-16112022-cert-2114268313", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Bruno/my-awesome-setfit-model
|
Bruno
| 2022-11-16T05:26:19Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-16T05:26:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
yuriashimizu/distilbert-base-uncased-finetuned-emotion
|
yuriashimizu
| 2022-11-16T05:24:30Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-04T07:23:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9231361383574684
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2205
- Accuracy: 0.923
- F1: 0.9231
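A minimal inference sketch using the standard 🤗 Transformers pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="yuriashimizu/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```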
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8625 | 1.0 | 250 | 0.3246 | 0.9075 | 0.9062 |
| 0.2522 | 2.0 | 500 | 0.2205 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Bardia323/BestOfMidjourneyFusion
|
Bardia323
| 2022-11-16T04:59:42Z | 0 | 1 | null |
[
"license:wtfpl",
"region:us"
] | null | 2022-11-14T06:59:23Z |
---
license: wtfpl
---
Midjourney model trained on a dataset of 200 curated images chosen by the community as the top weekly picks on November 11th.
Trained for 4,800 steps using the Dreambooth method.
Use "MJstyle artwork" as the prompt.
|
buihungtpd3/layoutlmv3-finetuned-vinv2
|
buihungtpd3
| 2022-11-16T04:33:24Z | 78 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv3",
"token-classification",
"generated_from_trainer",
"dataset:drug_bill_layoutv3",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-16T03:11:29Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- drug_bill_layoutv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-vinv2
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: drug_bill_layoutv3
type: drug_bill_layoutv3
config: Vin_Drug_Bill
split: train
args: Vin_Drug_Bill
metrics:
- name: Precision
type: precision
value: 1.0
- name: Recall
type: recall
value: 1.0
- name: F1
type: f1
value: 1.0
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-vinv2
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the drug_bill_layoutv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
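A minimal token-classification sketch with the standard LayoutLMv3 APIs (assumes the processor in this repo has OCR enabled, which requires `pytesseract`; the input image is illustrative):
```python
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

model_id = "buihungtpd3/layoutlmv3-finetuned-vinv2"
processor = AutoProcessor.from_pretrained(model_id, apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("drug_bill.png").convert("RGB")
encoding = processor(image, return_tensors="pt")

outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```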
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.33 | 250 | 0.0025 | 0.9994 | 0.9994 | 0.9994 | 0.9998 |
| 0.0662 | 2.66 | 500 | 0.0004 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0662 | 3.99 | 750 | 0.0003 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0111 | 5.32 | 1000 | 0.0002 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0111 | 6.65 | 1250 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0126 | 7.98 | 1500 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0126 | 9.31 | 1750 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0032 | 10.64 | 2000 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0032 | 11.97 | 2250 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 13.3 | 2500 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0011 | 14.63 | 2750 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 15.96 | 3000 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
huggingtweets/shaanvp
|
huggingtweets
| 2022-11-16T04:01:42Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-12T08:15:28Z |
---
language: en
thumbnail: http://www.huggingtweets.com/shaanvp/1668571298343/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433108361839472642/3d54PCqW_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Shaan Puri</div>
<div style="text-align: center; font-size: 14px;">@shaanvp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Shaan Puri.
| Data | Shaan Puri |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 279 |
| Short tweets | 321 |
| Tweets kept | 2649 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yj2zt7f/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @shaanvp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3jp529xm) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3jp529xm/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/shaanvp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AkashM/t5-small-finetuned-xsum
|
AkashM
| 2022-11-16T03:44:59Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-26T20:35:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5585
- Rouge1: 45.8829
- Rouge2: 35.4564
- Rougel: 44.7101
- Rougelsum: 45.1103
- Gen Len: 11.9031
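A minimal inference sketch with the standard 🤗 Transformers summarization pipeline (the input text and generation lengths are illustrative):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="AkashM/t5-small-finetuned-xsum")

text = "Your long input document goes here."
print(summarizer(text, max_length=60, min_length=5)[0]["summary_text"])
```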
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.713 | 1.0 | 4578 | 1.5585 | 45.8829 | 35.4564 | 44.7101 | 45.1103 | 11.9031 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
huggingtweets/thomastrainrek
|
huggingtweets
| 2022-11-16T03:38:10Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-15T05:29:31Z |
---
language: en
thumbnail: http://www.huggingtweets.com/thomastrainrek/1668569886926/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1321337599332593664/tqNLm-HD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">thomas the trainwreck</div>
<div style="text-align: center; font-size: 14px;">@thomastrainrek</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from thomas the trainwreck.
| Data | thomas the trainwreck |
| --- | --- |
| Tweets downloaded | 1817 |
| Retweets | 48 |
| Short tweets | 55 |
| Tweets kept | 1714 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2eizpiau/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thomastrainrek's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/93k1hlvn) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/93k1hlvn/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thomastrainrek')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
monideep2255/pseudolabeling-step2-F01-Pass-2
|
monideep2255
| 2022-11-16T03:08:02Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-15T23:39:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: pseudolabeling-step2-F01-Pass-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pseudolabeling-step2-F01-Pass-2
This model is a fine-tuned version of [monideep2255/XLRS-torgo](https://huggingface.co/monideep2255/XLRS-torgo) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2704
- Wer: 1.1942
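A minimal inference sketch via the standard 🤗 Transformers ASR pipeline (assumes the processor/tokenizer files are included in this repo; decoding local audio files requires `ffmpeg`):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="monideep2255/pseudolabeling-step2-F01-Pass-2",
)
print(asr("speech_sample.wav")["text"])
```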
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.9509 | 2.94 | 400 | 1.1949 | 1.2734 |
| 0.5686 | 5.88 | 800 | 1.0452 | 1.2266 |
| 0.4179 | 8.82 | 1200 | 1.1876 | 1.2032 |
| 0.3137 | 11.76 | 1600 | 1.2691 | 1.2572 |
| 0.2329 | 14.7 | 2000 | 1.2944 | 1.2104 |
| 0.1851 | 17.64 | 2400 | 1.4389 | 1.2626 |
| 0.1427 | 20.59 | 2800 | 1.3325 | 1.2608 |
| 0.1101 | 23.53 | 3200 | 1.4132 | 1.2176 |
| 0.0805 | 26.47 | 3600 | 1.3443 | 1.2482 |
| 0.0645 | 29.41 | 4000 | 1.2704 | 1.1942 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
NightRiderSp19/AITFTI
|
NightRiderSp19
| 2022-11-16T03:00:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-16T02:46:57Z |
import json
import requests

API_TOKEN = "hf_xxx"  # replace with your Hugging Face API token
API_URL = "https://api-inference.huggingface.co/models/cwwierzbicki/autotrain-dogspeople2-1978966090"
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(filename):
    # POST the raw image bytes to the hosted Inference API and parse the JSON response
    with open(filename, "rb") as f:
        data = f.read()
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))

output = query("cats.jpg")
|
ricardo-filho/bert_sesame_ner
|
ricardo-filho
| 2022-11-16T02:46:13Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T19:33:38Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert_semeval_ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_semeval_ner
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
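A minimal inference sketch with the standard token-classification pipeline (the label set is not documented in this card, and the Portuguese example sentence is only illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ricardo-filho/bert_sesame_ner",
    aggregation_strategy="simple",
)
print(ner("A Petrobras anunciou novos investimentos no Rio de Janeiro."))
```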
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
kksukk/hubert_zeroth_gpu_scratch
|
kksukk
| 2022-11-16T02:34:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:zeroth_korean_asr",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-11-15T04:28:46Z |
---
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
metrics:
- wer
model-index:
- name: hubert_zeroth_gpu_scratch
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: zeroth_korean_asr
type: zeroth_korean_asr
config: clean
split: train
args: clean
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_zeroth_gpu_scratch
This model is a fine-tuned version of [](https://huggingface.co/) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8280
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 10.6349 | 0.14 | 100 | 4.8579 | 1.0 |
| 4.7539 | 0.29 | 200 | 4.7308 | 1.0 |
| 4.7255 | 0.43 | 300 | 4.7278 | 1.0 |
| 4.7124 | 0.57 | 400 | 5.3295 | 1.0 |
| 4.7543 | 0.72 | 500 | 4.7487 | 1.0 |
| 4.8932 | 0.86 | 600 | 4.9136 | 1.0 |
| 4.8533 | 1.01 | 700 | 4.8799 | 1.0 |
| 4.8483 | 1.15 | 800 | 4.8665 | 1.0 |
| 4.8424 | 1.29 | 900 | 4.8622 | 1.0 |
| 4.8426 | 1.44 | 1000 | 4.8506 | 1.0 |
| 4.8373 | 1.58 | 1100 | 4.8603 | 1.0 |
| 4.8452 | 1.72 | 1200 | 4.8537 | 1.0 |
| 4.8391 | 1.87 | 1300 | 4.8520 | 1.0 |
| 4.8405 | 2.01 | 1400 | 4.8682 | 1.0 |
| 4.8375 | 2.16 | 1500 | 4.8637 | 1.0 |
| 4.8413 | 2.3 | 1600 | 4.8664 | 1.0 |
| 4.8388 | 2.44 | 1700 | 4.8473 | 1.0 |
| 4.8389 | 2.59 | 1800 | 4.8484 | 1.0 |
| 4.8343 | 2.73 | 1900 | 4.8629 | 1.0 |
| 4.8294 | 2.87 | 2000 | 4.8571 | 1.0 |
| 4.827 | 3.02 | 2100 | 4.8472 | 1.0 |
| 4.8316 | 3.16 | 2200 | 4.8576 | 1.0 |
| 4.8241 | 3.3 | 2300 | 4.8398 | 1.0 |
| 4.8333 | 3.45 | 2400 | 4.8603 | 1.0 |
| 4.8387 | 3.59 | 2500 | 4.8484 | 1.0 |
| 4.8312 | 3.74 | 2600 | 4.8420 | 1.0 |
| 4.8304 | 3.88 | 2700 | 4.8398 | 1.0 |
| 4.8291 | 4.02 | 2800 | 4.8355 | 1.0 |
| 4.8326 | 4.17 | 2900 | 4.8415 | 1.0 |
| 4.8274 | 4.31 | 3000 | 4.8338 | 1.0 |
| 4.8245 | 4.45 | 3100 | 4.8389 | 1.0 |
| 4.83 | 4.6 | 3200 | 4.8332 | 1.0 |
| 4.8335 | 4.74 | 3300 | 4.8393 | 1.0 |
| 4.829 | 4.89 | 3400 | 4.8352 | 1.0 |
| 4.832 | 5.03 | 3500 | 4.8329 | 1.0 |
| 4.8285 | 5.17 | 3600 | 4.8343 | 1.0 |
| 4.8302 | 5.32 | 3700 | 4.8381 | 1.0 |
| 4.8371 | 5.46 | 3800 | 4.8426 | 1.0 |
| 4.8226 | 5.6 | 3900 | 4.8383 | 1.0 |
| 4.8257 | 5.75 | 4000 | 4.8372 | 1.0 |
| 4.8222 | 5.89 | 4100 | 4.8332 | 1.0 |
| 4.8255 | 6.03 | 4200 | 4.8437 | 1.0 |
| 4.8277 | 6.18 | 4300 | 4.8351 | 1.0 |
| 4.8257 | 6.32 | 4400 | 4.8368 | 1.0 |
| 4.8301 | 6.47 | 4500 | 4.8345 | 1.0 |
| 4.8267 | 6.61 | 4600 | 4.8343 | 1.0 |
| 4.8296 | 6.75 | 4700 | 4.8388 | 1.0 |
| 4.828 | 6.9 | 4800 | 4.8374 | 1.0 |
| 4.8173 | 7.04 | 4900 | 4.8375 | 1.0 |
| 4.8234 | 7.18 | 5000 | 4.8348 | 1.0 |
| 4.8233 | 7.33 | 5100 | 4.8349 | 1.0 |
| 4.8232 | 7.47 | 5200 | 4.8339 | 1.0 |
| 4.8293 | 7.61 | 5300 | 4.8386 | 1.0 |
| 4.8305 | 7.76 | 5400 | 4.8385 | 1.0 |
| 4.8253 | 7.9 | 5500 | 4.8315 | 1.0 |
| 4.823 | 8.05 | 5600 | 4.8325 | 1.0 |
| 4.8313 | 8.19 | 5700 | 4.8311 | 1.0 |
| 4.8284 | 8.33 | 5800 | 4.8329 | 1.0 |
| 4.8199 | 8.48 | 5900 | 4.8329 | 1.0 |
| 4.8208 | 8.62 | 6000 | 4.8319 | 1.0 |
| 4.8315 | 8.76 | 6100 | 4.8334 | 1.0 |
| 4.8265 | 8.91 | 6200 | 4.8308 | 1.0 |
| 4.8218 | 9.05 | 6300 | 4.8313 | 1.0 |
| 4.8172 | 9.2 | 6400 | 4.8294 | 1.0 |
| 4.8231 | 9.34 | 6500 | 4.8299 | 1.0 |
| 4.825 | 9.48 | 6600 | 4.8311 | 1.0 |
| 4.826 | 9.63 | 6700 | 4.8299 | 1.0 |
| 4.8269 | 9.77 | 6800 | 4.8321 | 1.0 |
| 4.8275 | 9.91 | 6900 | 4.8306 | 1.0 |
| 4.8199 | 10.06 | 7000 | 4.8302 | 1.0 |
| 4.8217 | 10.2 | 7100 | 4.8316 | 1.0 |
| 4.8237 | 10.34 | 7200 | 4.8296 | 1.0 |
| 4.8253 | 10.49 | 7300 | 4.8318 | 1.0 |
| 4.8256 | 10.63 | 7400 | 4.8320 | 1.0 |
| 4.8265 | 10.78 | 7500 | 4.8297 | 1.0 |
| 4.8201 | 10.92 | 7600 | 4.8309 | 1.0 |
| 4.8259 | 11.06 | 7700 | 4.8302 | 1.0 |
| 4.8216 | 11.21 | 7800 | 4.8315 | 1.0 |
| 4.8206 | 11.35 | 7900 | 4.8328 | 1.0 |
| 4.8249 | 11.49 | 8000 | 4.8290 | 1.0 |
| 4.8231 | 11.64 | 8100 | 4.8297 | 1.0 |
| 4.8232 | 11.78 | 8200 | 4.8303 | 1.0 |
| 4.8245 | 11.93 | 8300 | 4.8283 | 1.0 |
| 4.8224 | 12.07 | 8400 | 4.8309 | 1.0 |
| 4.822 | 12.21 | 8500 | 4.8341 | 1.0 |
| 4.8234 | 12.36 | 8600 | 4.8300 | 1.0 |
| 4.8233 | 12.5 | 8700 | 4.8302 | 1.0 |
| 4.825 | 12.64 | 8800 | 4.8301 | 1.0 |
| 4.8246 | 12.79 | 8900 | 4.8310 | 1.0 |
| 4.8169 | 12.93 | 9000 | 4.8308 | 1.0 |
| 4.8194 | 13.07 | 9100 | 4.8319 | 1.0 |
| 4.8182 | 13.22 | 9200 | 4.8334 | 1.0 |
| 4.8245 | 13.36 | 9300 | 4.8334 | 1.0 |
| 4.8274 | 13.51 | 9400 | 4.8427 | 1.0 |
| 4.8194 | 13.65 | 9500 | 4.8393 | 1.0 |
| 4.825 | 13.79 | 9600 | 4.8368 | 1.0 |
| 4.8162 | 13.94 | 9700 | 4.8371 | 1.0 |
| 4.8213 | 14.08 | 9800 | 4.8359 | 1.0 |
| 4.8275 | 14.22 | 9900 | 4.8330 | 1.0 |
| 4.8119 | 14.37 | 10000 | 4.8328 | 1.0 |
| 4.8267 | 14.51 | 10100 | 4.8327 | 1.0 |
| 4.8218 | 14.66 | 10200 | 4.8328 | 1.0 |
| 4.8221 | 14.8 | 10300 | 4.8344 | 1.0 |
| 4.8181 | 14.94 | 10400 | 4.8330 | 1.0 |
| 4.8204 | 15.09 | 10500 | 4.8326 | 1.0 |
| 4.8235 | 15.23 | 10600 | 4.8340 | 1.0 |
| 4.8113 | 15.37 | 10700 | 4.8330 | 1.0 |
| 4.8268 | 15.52 | 10800 | 4.8330 | 1.0 |
| 4.8199 | 15.66 | 10900 | 4.8341 | 1.0 |
| 4.8213 | 15.8 | 11000 | 4.8320 | 1.0 |
| 4.8268 | 15.95 | 11100 | 4.8345 | 1.0 |
| 4.8113 | 16.09 | 11200 | 4.8367 | 1.0 |
| 4.8216 | 16.24 | 11300 | 4.8358 | 1.0 |
| 4.8287 | 16.38 | 11400 | 4.8343 | 1.0 |
| 4.8185 | 16.52 | 11500 | 4.8341 | 1.0 |
| 4.8226 | 16.67 | 11600 | 4.8321 | 1.0 |
| 4.8187 | 16.81 | 11700 | 4.8337 | 1.0 |
| 4.8183 | 16.95 | 11800 | 4.8324 | 1.0 |
| 4.8173 | 17.1 | 11900 | 4.8334 | 1.0 |
| 4.8217 | 17.24 | 12000 | 4.8338 | 1.0 |
| 4.8174 | 17.39 | 12100 | 4.8323 | 1.0 |
| 4.8193 | 17.53 | 12200 | 4.8358 | 1.0 |
| 4.8203 | 17.67 | 12300 | 4.8313 | 1.0 |
| 4.8182 | 17.82 | 12400 | 4.8311 | 1.0 |
| 4.8245 | 17.96 | 12500 | 4.8324 | 1.0 |
| 4.8195 | 18.1 | 12600 | 4.8301 | 1.0 |
| 4.8197 | 18.25 | 12700 | 4.8345 | 1.0 |
| 4.8163 | 18.39 | 12800 | 4.8326 | 1.0 |
| 4.8227 | 18.53 | 12900 | 4.8319 | 1.0 |
| 4.8254 | 18.68 | 13000 | 4.8321 | 1.0 |
| 4.8197 | 18.82 | 13100 | 4.8315 | 1.0 |
| 4.819 | 18.97 | 13200 | 4.8306 | 1.0 |
| 4.8106 | 19.11 | 13300 | 4.8297 | 1.0 |
| 4.8161 | 19.25 | 13400 | 4.8314 | 1.0 |
| 4.8147 | 19.4 | 13500 | 4.8340 | 1.0 |
| 4.8237 | 19.54 | 13600 | 4.8313 | 1.0 |
| 4.8186 | 19.68 | 13700 | 4.8298 | 1.0 |
| 4.8217 | 19.83 | 13800 | 4.8302 | 1.0 |
| 4.8239 | 19.97 | 13900 | 4.8297 | 1.0 |
| 4.8189 | 20.11 | 14000 | 4.8313 | 1.0 |
| 4.8254 | 20.26 | 14100 | 4.8299 | 1.0 |
| 4.8166 | 20.4 | 14200 | 4.8297 | 1.0 |
| 4.8199 | 20.55 | 14300 | 4.8294 | 1.0 |
| 4.8129 | 20.69 | 14400 | 4.8307 | 1.0 |
| 4.8175 | 20.83 | 14500 | 4.8285 | 1.0 |
| 4.8195 | 20.98 | 14600 | 4.8281 | 1.0 |
| 4.82 | 21.12 | 14700 | 4.8293 | 1.0 |
| 4.8136 | 21.26 | 14800 | 4.8293 | 1.0 |
| 4.8177 | 21.41 | 14900 | 4.8287 | 1.0 |
| 4.826 | 21.55 | 15000 | 4.8288 | 1.0 |
| 4.8177 | 21.7 | 15100 | 4.8296 | 1.0 |
| 4.8165 | 21.84 | 15200 | 4.8303 | 1.0 |
| 4.8246 | 21.98 | 15300 | 4.8282 | 1.0 |
| 4.8146 | 22.13 | 15400 | 4.8276 | 1.0 |
| 4.819 | 22.27 | 15500 | 4.8279 | 1.0 |
| 4.814 | 22.41 | 15600 | 4.8295 | 1.0 |
| 4.8195 | 22.56 | 15700 | 4.8274 | 1.0 |
| 4.8189 | 22.7 | 15800 | 4.8275 | 1.0 |
| 4.822 | 22.84 | 15900 | 4.8274 | 1.0 |
| 4.8195 | 22.99 | 16000 | 4.8274 | 1.0 |
| 4.8146 | 23.13 | 16100 | 4.8274 | 1.0 |
| 4.8126 | 23.28 | 16200 | 4.8271 | 1.0 |
| 4.8172 | 23.42 | 16300 | 4.8272 | 1.0 |
| 4.8214 | 23.56 | 16400 | 4.8277 | 1.0 |
| 4.821 | 23.71 | 16500 | 4.8278 | 1.0 |
| 4.8212 | 23.85 | 16600 | 4.8274 | 1.0 |
| 4.819 | 23.99 | 16700 | 4.8277 | 1.0 |
| 4.8165 | 24.14 | 16800 | 4.8274 | 1.0 |
| 4.8212 | 24.28 | 16900 | 4.8268 | 1.0 |
| 4.8198 | 24.43 | 17000 | 4.8272 | 1.0 |
| 4.8228 | 24.57 | 17100 | 4.8281 | 1.0 |
| 4.8159 | 24.71 | 17200 | 4.8272 | 1.0 |
| 4.8123 | 24.86 | 17300 | 4.8274 | 1.0 |
| 4.8143 | 25.0 | 17400 | 4.8284 | 1.0 |
| 4.8174 | 25.14 | 17500 | 4.8289 | 1.0 |
| 4.8243 | 25.29 | 17600 | 4.8276 | 1.0 |
| 4.8145 | 25.43 | 17700 | 4.8283 | 1.0 |
| 4.8129 | 25.57 | 17800 | 4.8277 | 1.0 |
| 4.815 | 25.72 | 17900 | 4.8272 | 1.0 |
| 4.8155 | 25.86 | 18000 | 4.8279 | 1.0 |
| 4.8217 | 26.01 | 18100 | 4.8269 | 1.0 |
| 4.8106 | 26.15 | 18200 | 4.8277 | 1.0 |
| 4.8188 | 26.29 | 18300 | 4.8270 | 1.0 |
| 4.8232 | 26.44 | 18400 | 4.8277 | 1.0 |
| 4.816 | 26.58 | 18500 | 4.8278 | 1.0 |
| 4.8159 | 26.72 | 18600 | 4.8275 | 1.0 |
| 4.8199 | 26.87 | 18700 | 4.8274 | 1.0 |
| 4.8149 | 27.01 | 18800 | 4.8278 | 1.0 |
| 4.8103 | 27.16 | 18900 | 4.8279 | 1.0 |
| 4.8244 | 27.3 | 19000 | 4.8275 | 1.0 |
| 4.8217 | 27.44 | 19100 | 4.8279 | 1.0 |
| 4.8168 | 27.59 | 19200 | 4.8277 | 1.0 |
| 4.8111 | 27.73 | 19300 | 4.8287 | 1.0 |
| 4.816 | 27.87 | 19400 | 4.8279 | 1.0 |
| 4.8166 | 28.02 | 19500 | 4.8282 | 1.0 |
| 4.8129 | 28.16 | 19600 | 4.8281 | 1.0 |
| 4.8207 | 28.3 | 19700 | 4.8275 | 1.0 |
| 4.8196 | 28.45 | 19800 | 4.8274 | 1.0 |
| 4.8208 | 28.59 | 19900 | 4.8277 | 1.0 |
| 4.811 | 28.74 | 20000 | 4.8280 | 1.0 |
| 4.8176 | 28.88 | 20100 | 4.8280 | 1.0 |
| 4.8126 | 29.02 | 20200 | 4.8283 | 1.0 |
| 4.8161 | 29.17 | 20300 | 4.8279 | 1.0 |
| 4.8134 | 29.31 | 20400 | 4.8278 | 1.0 |
| 4.8201 | 29.45 | 20500 | 4.8279 | 1.0 |
| 4.8185 | 29.6 | 20600 | 4.8283 | 1.0 |
| 4.8174 | 29.74 | 20700 | 4.8280 | 1.0 |
| 4.8145 | 29.89 | 20800 | 4.8280 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.0.0
- Tokenizers 0.13.2
|
Samaneh/xlm-roberta-base-finetuned-panx-de
|
Samaneh
| 2022-11-16T02:18:35Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-16T01:53:54Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
keithhon/distilbert-base-uncased-text-classification-template-1.0
|
keithhon
| 2022-11-16T02:10:37Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T07:50:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-text-classification-template
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-text-classification-template
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6637
- F1: 0.5
- Roc Auc: 0.6667
- Accuracy: 0.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---:|:-------:|:--------:|
| No log | 1.0 | 6 | 0.6637 | 0.5 | 0.6667 | 0.3333 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Poojitha/distilbert-base-uncased-finetuned-emotion
|
Poojitha
| 2022-11-16T00:42:55Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-16T00:34:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2861
- Accuracy: 0.4731
- F1: 0.4643
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.5548 | 1.0 | 63 | 1.4000 | 0.3880 | 0.3166 |
| 1.3084 | 2.0 | 126 | 1.2861 | 0.4731 | 0.4643 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
mrm8488/galactica-125m
|
mrm8488
| 2022-11-16T00:20:06Z | 203 | 8 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"arxiv:1810.03993",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-15T23:37:56Z |
---
license: apache-2.0
---
# This is WIP!
# GALACTICA (mini)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer based architecture in a decoder-only setup with a few modifications (see paper for more details).
## Paper & Demo
[[Paper]](https://galactica.org/paper.pdf) / [[Demo]](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
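While this mirror is still WIP, a minimal generation sketch with standard 🤗 Transformers APIs (assumes the tokenizer files are present in this repo; the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="mrm8488/galactica-125m")
prompt = "The benefits of attention mechanisms in protein language models are"
print(generator(prompt, max_new_tokens=40)[0]["generated_text"])
```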
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information. See the paper for full information on the training data.
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination, and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground-truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use for application to particular domains, such as mathematics, biology and chemistry. In the paper, we demonstrated several examples of the model acting as an alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
|
meongracun/nmt-mpst-id-en-lr_1e-3-ep_20-seq_128_bs-32
|
meongracun
| 2022-11-15T23:00:26Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T22:29:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_1e-3-ep_20-seq_128_bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-3-ep_20-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6391
- Bleu: 18.9112
- Meteor: 0.3583
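A minimal inference sketch via the text2text-generation pipeline (whether the model expects a task prefix is not documented, so feeding the raw Indonesian sentence below is an assumption):
```python
from transformers import pipeline

translator = pipeline(
    "text2text-generation",
    model="meongracun/nmt-mpst-id-en-lr_1e-3-ep_20-seq_128_bs-32",
)
print(translator("Saya sedang belajar pemrosesan bahasa alami.")[0]["generated_text"])
```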
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| No log | 1.0 | 202 | 1.8793 | 13.9958 | 0.2988 |
| No log | 2.0 | 404 | 1.7154 | 15.2332 | 0.3136 |
| 1.6109 | 3.0 | 606 | 1.6615 | 16.4394 | 0.3279 |
| 1.6109 | 4.0 | 808 | 1.6292 | 17.1368 | 0.3375 |
| 1.2855 | 5.0 | 1010 | 1.6205 | 17.7174 | 0.3451 |
| 1.2855 | 6.0 | 1212 | 1.6246 | 17.9786 | 0.3478 |
| 1.2855 | 7.0 | 1414 | 1.6178 | 18.3294 | 0.3515 |
| 1.0144 | 8.0 | 1616 | 1.6195 | 18.6155 | 0.3556 |
| 1.0144 | 9.0 | 1818 | 1.6320 | 18.7035 | 0.3565 |
| 0.8814 | 10.0 | 2020 | 1.6391 | 18.9112 | 0.3583 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
storm3d/ppo-LunarLander-v2
|
storm3d
| 2022-11-15T22:51:04Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-15T22:50:29Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 219.75 +/- 21.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; adjust it to the actual file):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename below is assumed; check the repo's file list for the real archive name.
checkpoint = load_from_hub(repo_id="storm3d/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
meongracun/nmt-mpst-id-en-lr_1e-4-ep_10-seq_128_bs-64
|
meongracun
| 2022-11-15T22:43:24Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T22:17:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_1e-4-ep_10-seq_128_bs-64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-4-ep_10-seq_128_bs-64
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4108
- Bleu: 5.8803
- Meteor: 0.1857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 101 | 2.8898 | 2.8643 | 0.1158 |
| No log | 2.0 | 202 | 2.7574 | 3.5561 | 0.1355 |
| No log | 3.0 | 303 | 2.6672 | 4.1558 | 0.1509 |
| No log | 4.0 | 404 | 2.5927 | 4.5156 | 0.1593 |
| 2.9931 | 5.0 | 505 | 2.5319 | 4.9528 | 0.1673 |
| 2.9931 | 6.0 | 606 | 2.4832 | 5.2665 | 0.1728 |
| 2.9931 | 7.0 | 707 | 2.4505 | 5.4822 | 0.1778 |
| 2.9931 | 8.0 | 808 | 2.4290 | 5.7456 | 0.1829 |
| 2.9931 | 9.0 | 909 | 2.4147 | 5.8499 | 0.185 |
| 2.6176 | 10.0 | 1010 | 2.4108 | 5.8803 | 0.1857 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
gngpostalsrvc/BERiT_2000_custom_architecture_100_epochs
|
gngpostalsrvc
| 2022-11-15T22:01:12Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-15T19:27:52Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture_100_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_100_epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3615
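A minimal fill-mask sketch (the training corpus and language are not documented in this card, so the input sentence is only a placeholder):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="gngpostalsrvc/BERiT_2000_custom_architecture_100_epochs",
)
mask = fill_mask.tokenizer.mask_token
print(fill_mask(f"Replace the {mask} token with a prediction."))
```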
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 16.6444 | 0.19 | 500 | 8.8209 |
| 8.1904 | 0.39 | 1000 | 7.5197 |
| 7.3572 | 0.58 | 1500 | 7.1037 |
| 7.0042 | 0.77 | 2000 | 6.8817 |
| 6.8626 | 0.97 | 2500 | 6.7353 |
| 6.7582 | 1.16 | 3000 | 6.5802 |
| 6.6345 | 1.36 | 3500 | 6.4232 |
| 6.5817 | 1.55 | 4000 | 6.3375 |
| 6.517 | 1.74 | 4500 | 6.3439 |
| 6.4531 | 1.94 | 5000 | 6.2870 |
| 6.469 | 2.13 | 5500 | 6.3208 |
| 6.4503 | 2.32 | 6000 | 6.2136 |
| 6.3679 | 2.52 | 6500 | 6.2260 |
| 6.4032 | 2.71 | 7000 | 6.2015 |
| 6.357 | 2.9 | 7500 | 6.2363 |
| 6.3349 | 3.1 | 8000 | 6.2101 |
| 6.342 | 3.29 | 8500 | 6.2031 |
| 6.3047 | 3.49 | 9000 | 6.1945 |
| 6.3204 | 3.68 | 9500 | 6.1681 |
| 6.2935 | 3.87 | 10000 | 6.1999 |
| 6.3319 | 4.07 | 10500 | 6.1613 |
| 6.2528 | 4.26 | 11000 | 6.1354 |
| 6.2683 | 4.45 | 11500 | 6.2427 |
| 6.2572 | 4.65 | 12000 | 6.1477 |
| 6.2509 | 4.84 | 12500 | 6.1770 |
| 6.2402 | 5.03 | 13000 | 6.1779 |
| 6.2412 | 5.23 | 13500 | 6.1516 |
| 6.2291 | 5.42 | 14000 | 6.1498 |
| 6.2203 | 5.62 | 14500 | 6.1804 |
| 6.2341 | 5.81 | 15000 | 6.1501 |
| 6.2242 | 6.0 | 15500 | 6.1239 |
| 6.2163 | 6.2 | 16000 | 6.1567 |
| 6.2079 | 6.39 | 16500 | 6.1188 |
| 6.2176 | 6.58 | 17000 | 6.1620 |
| 6.1926 | 6.78 | 17500 | 6.1635 |
| 6.1743 | 6.97 | 18000 | 6.0749 |
| 6.1978 | 7.16 | 18500 | 6.1316 |
| 6.1868 | 7.36 | 19000 | 6.0297 |
| 6.19 | 7.55 | 19500 | 6.1126 |
| 6.2005 | 7.75 | 20000 | 6.0985 |
| 6.2056 | 7.94 | 20500 | 6.1100 |
| 6.1628 | 8.13 | 21000 | 6.1321 |
| 6.169 | 8.33 | 21500 | 6.0842 |
| 6.1636 | 8.52 | 22000 | 6.1205 |
| 6.1278 | 8.71 | 22500 | 6.1270 |
| 6.1656 | 8.91 | 23000 | 6.1049 |
| 6.1526 | 9.1 | 23500 | 6.1462 |
| 6.1624 | 9.3 | 24000 | 6.0534 |
| 6.1353 | 9.49 | 24500 | 6.0862 |
| 6.1264 | 9.68 | 25000 | 6.0844 |
| 6.1648 | 9.88 | 25500 | 6.1206 |
| 6.1574 | 10.07 | 26000 | 6.0942 |
| 6.0971 | 10.26 | 26500 | 6.1151 |
| 6.119 | 10.46 | 27000 | 6.1148 |
| 6.1217 | 10.65 | 27500 | 6.1076 |
| 6.1054 | 10.84 | 28000 | 6.1457 |
| 6.1402 | 11.04 | 28500 | 6.0442 |
| 6.1124 | 11.23 | 29000 | 6.1404 |
| 6.1457 | 11.43 | 29500 | 6.0622 |
| 6.1248 | 11.62 | 30000 | 6.1377 |
| 6.1204 | 11.81 | 30500 | 6.1056 |
| 6.1097 | 12.01 | 31000 | 6.0780 |
| 6.0713 | 12.2 | 31500 | 6.0061 |
| 6.1119 | 12.39 | 32000 | 6.1671 |
| 6.0744 | 12.59 | 32500 | 6.1235 |
| 6.082 | 12.78 | 33000 | 6.0905 |
| 6.0962 | 12.97 | 33500 | 6.0936 |
| 6.1265 | 13.17 | 34000 | 6.0786 |
| 6.0941 | 13.36 | 34500 | 6.0944 |
| 6.0694 | 13.56 | 35000 | 6.0988 |
| 6.0784 | 13.75 | 35500 | 6.1221 |
| 6.0749 | 13.94 | 36000 | 6.0961 |
| 6.103 | 14.14 | 36500 | 6.0141 |
| 6.0944 | 14.33 | 37000 | 6.0812 |
| 6.0869 | 14.52 | 37500 | 6.0423 |
| 6.0986 | 14.72 | 38000 | 6.1194 |
| 6.0759 | 14.91 | 38500 | 6.0504 |
| 6.0592 | 15.1 | 39000 | 6.1483 |
| 6.0624 | 15.3 | 39500 | 6.0978 |
| 6.077 | 15.49 | 40000 | 6.0585 |
| 6.0581 | 15.69 | 40500 | 6.0355 |
| 6.0612 | 15.88 | 41000 | 6.0742 |
| 6.0536 | 16.07 | 41500 | 6.1443 |
| 6.057 | 16.27 | 42000 | 6.0574 |
| 6.0419 | 16.46 | 42500 | 6.0372 |
| 6.0076 | 16.65 | 43000 | 6.0624 |
| 6.0773 | 16.85 | 43500 | 6.0624 |
| 6.0317 | 17.04 | 44000 | 6.0738 |
| 6.0248 | 17.23 | 44500 | 6.0959 |
| 6.0459 | 17.43 | 45000 | 6.0636 |
| 6.0332 | 17.62 | 45500 | 6.0536 |
| 6.0319 | 17.82 | 46000 | 6.0659 |
| 6.0363 | 18.01 | 46500 | 6.0154 |
| 6.0001 | 18.2 | 47000 | 6.0082 |
| 5.9719 | 18.4 | 47500 | 6.0778 |
| 6.0332 | 18.59 | 48000 | 6.0491 |
| 6.0061 | 18.78 | 48500 | 5.9457 |
| 5.9675 | 18.98 | 49000 | 5.9768 |
| 5.9749 | 19.17 | 49500 | 6.0173 |
| 5.9944 | 19.36 | 50000 | 5.9981 |
| 6.0248 | 19.56 | 50500 | 5.9255 |
| 5.9774 | 19.75 | 51000 | 6.0158 |
| 5.9768 | 19.95 | 51500 | 5.9443 |
| 5.9499 | 20.14 | 52000 | 5.9708 |
| 5.979 | 20.33 | 52500 | 5.9296 |
| 5.9881 | 20.53 | 53000 | 5.9506 |
| 5.9775 | 20.72 | 53500 | 5.9266 |
| 5.9361 | 20.91 | 54000 | 5.9270 |
| 5.9427 | 21.11 | 54500 | 5.9461 |
| 5.9396 | 21.3 | 55000 | 5.9156 |
| 5.9596 | 21.49 | 55500 | 5.9185 |
| 5.9079 | 21.69 | 56000 | 5.9630 |
| 5.9579 | 21.88 | 56500 | 5.8991 |
| 5.9564 | 22.08 | 57000 | 5.9097 |
| 5.9225 | 22.27 | 57500 | 5.9452 |
| 5.9202 | 22.46 | 58000 | 5.8680 |
| 5.9103 | 22.66 | 58500 | 5.8985 |
| 5.9106 | 22.85 | 59000 | 5.8656 |
| 5.913 | 23.04 | 59500 | 5.8292 |
| 5.9249 | 23.24 | 60000 | 5.8420 |
| 5.8948 | 23.43 | 60500 | 5.8782 |
| 5.9273 | 23.63 | 61000 | 5.8952 |
| 5.8788 | 23.82 | 61500 | 5.8438 |
| 5.898 | 24.01 | 62000 | 5.8705 |
| 5.8809 | 24.21 | 62500 | 5.7648 |
| 5.8953 | 24.4 | 63000 | 5.8283 |
| 5.9177 | 24.59 | 63500 | 5.7760 |
| 5.8809 | 24.79 | 64000 | 5.8144 |
| 5.8994 | 24.98 | 64500 | 5.8348 |
| 5.8817 | 25.17 | 65000 | 5.8334 |
| 5.8701 | 25.37 | 65500 | 5.7240 |
| 5.8518 | 25.56 | 66000 | 5.8187 |
| 5.8406 | 25.76 | 66500 | 5.8133 |
| 5.859 | 25.95 | 67000 | 5.7331 |
| 5.8627 | 26.14 | 67500 | 5.7711 |
| 5.8727 | 26.34 | 68000 | 5.7598 |
| 5.8295 | 26.53 | 68500 | 5.8364 |
| 5.8216 | 26.72 | 69000 | 5.7586 |
| 5.8458 | 26.92 | 69500 | 5.7413 |
| 5.8597 | 27.11 | 70000 | 5.7444 |
| 5.842 | 27.3 | 70500 | 5.7288 |
| 5.8254 | 27.5 | 71000 | 5.7811 |
| 5.8285 | 27.69 | 71500 | 5.7120 |
| 5.8106 | 27.89 | 72000 | 5.6733 |
| 5.8073 | 28.08 | 72500 | 5.7163 |
| 5.7932 | 28.27 | 73000 | 5.7258 |
| 5.7919 | 28.47 | 73500 | 5.6985 |
| 5.7881 | 28.66 | 74000 | 5.7321 |
| 5.7942 | 28.85 | 74500 | 5.6545 |
| 5.8011 | 29.05 | 75000 | 5.6799 |
| 5.8071 | 29.24 | 75500 | 5.7270 |
| 5.784 | 29.43 | 76000 | 5.6806 |
| 5.7774 | 29.63 | 76500 | 5.6918 |
| 5.7345 | 29.82 | 77000 | 5.7138 |
| 5.7863 | 30.02 | 77500 | 5.7072 |
| 5.7774 | 30.21 | 78000 | 5.6649 |
| 5.7954 | 30.4 | 78500 | 5.6150 |
| 5.7624 | 30.6 | 79000 | 5.6398 |
| 5.7296 | 30.79 | 79500 | 5.6216 |
| 5.7053 | 30.98 | 80000 | 5.5447 |
| 5.7688 | 31.18 | 80500 | 5.6245 |
| 5.7254 | 31.37 | 81000 | 5.6100 |
| 5.755 | 31.56 | 81500 | 5.6257 |
| 5.7854 | 31.76 | 82000 | 5.6330 |
| 5.7351 | 31.95 | 82500 | 5.5588 |
| 5.7233 | 32.15 | 83000 | 5.5590 |
| 5.7225 | 32.34 | 83500 | 5.5480 |
| 5.7451 | 32.53 | 84000 | 5.6075 |
| 5.6989 | 32.73 | 84500 | 5.5447 |
| 5.7245 | 32.92 | 85000 | 5.5353 |
| 5.7132 | 33.11 | 85500 | 5.5563 |
| 5.7187 | 33.31 | 86000 | 5.5177 |
| 5.7203 | 33.5 | 86500 | 5.5630 |
| 5.6948 | 33.69 | 87000 | 5.5357 |
| 5.7118 | 33.89 | 87500 | 5.5367 |
| 5.6763 | 34.08 | 88000 | 5.4824 |
| 5.6923 | 34.28 | 88500 | 5.4489 |
| 5.6803 | 34.47 | 89000 | 5.5113 |
| 5.6977 | 34.66 | 89500 | 5.4829 |
| 5.6834 | 34.86 | 90000 | 5.4640 |
| 5.6596 | 35.05 | 90500 | 5.4816 |
| 5.6513 | 35.24 | 91000 | 5.4522 |
| 5.6687 | 35.44 | 91500 | 5.3984 |
| 5.6866 | 35.63 | 92000 | 5.4538 |
| 5.6479 | 35.82 | 92500 | 5.3811 |
| 5.6308 | 36.02 | 93000 | 5.3664 |
| 5.6299 | 36.21 | 93500 | 5.3788 |
| 5.6263 | 36.41 | 94000 | 5.3367 |
| 5.6305 | 36.6 | 94500 | 5.4058 |
| 5.6065 | 36.79 | 95000 | 5.3011 |
| 5.6236 | 36.99 | 95500 | 5.3301 |
| 5.6191 | 37.18 | 96000 | 5.3643 |
| 5.5991 | 37.37 | 96500 | 5.3917 |
| 5.6044 | 37.57 | 97000 | 5.3284 |
| 5.6001 | 37.76 | 97500 | 5.3199 |
| 5.5758 | 37.96 | 98000 | 5.2644 |
| 5.567 | 38.15 | 98500 | 5.3054 |
| 5.5404 | 38.34 | 99000 | 5.3473 |
| 5.5677 | 38.54 | 99500 | 5.2537 |
| 5.5676 | 38.73 | 100000 | 5.3135 |
| 5.5608 | 38.92 | 100500 | 5.2030 |
| 5.5523 | 39.12 | 101000 | 5.2808 |
| 5.545 | 39.31 | 101500 | 5.2114 |
| 5.5117 | 39.5 | 102000 | 5.2167 |
| 5.5403 | 39.7 | 102500 | 5.1930 |
| 5.5166 | 39.89 | 103000 | 5.1737 |
| 5.5267 | 40.09 | 103500 | 5.2112 |
| 5.5116 | 40.28 | 104000 | 5.2007 |
| 5.4874 | 40.47 | 104500 | 5.1654 |
| 5.5144 | 40.67 | 105000 | 5.1378 |
| 5.4683 | 40.86 | 105500 | 5.2039 |
| 5.4978 | 41.05 | 106000 | 5.1436 |
| 5.4781 | 41.25 | 106500 | 5.1642 |
| 5.5052 | 41.44 | 107000 | 5.1245 |
| 5.4844 | 41.63 | 107500 | 5.1809 |
| 5.4853 | 41.83 | 108000 | 5.0201 |
| 5.4814 | 42.02 | 108500 | 5.1054 |
| 5.4529 | 42.22 | 109000 | 5.1489 |
| 5.4804 | 42.41 | 109500 | 5.0555 |
| 5.4534 | 42.6 | 110000 | 5.0705 |
| 5.4401 | 42.8 | 110500 | 5.0464 |
| 5.45 | 42.99 | 111000 | 5.0069 |
| 5.4547 | 43.18 | 111500 | 5.0655 |
| 5.4212 | 43.38 | 112000 | 5.0563 |
| 5.3913 | 43.57 | 112500 | 5.0514 |
| 5.4268 | 43.76 | 113000 | 4.9936 |
| 5.3926 | 43.96 | 113500 | 5.0101 |
| 5.3882 | 44.15 | 114000 | 5.0294 |
| 5.4014 | 44.35 | 114500 | 5.0560 |
| 5.417 | 44.54 | 115000 | 4.9827 |
| 5.4012 | 44.73 | 115500 | 4.9811 |
| 5.3697 | 44.93 | 116000 | 4.9288 |
| 5.3991 | 45.12 | 116500 | 4.9576 |
| 5.3711 | 45.31 | 117000 | 4.9339 |
| 5.4081 | 45.51 | 117500 | 4.9250 |
| 5.3531 | 45.7 | 118000 | 4.8725 |
| 5.3826 | 45.89 | 118500 | 4.9501 |
| 5.3798 | 46.09 | 119000 | 4.9958 |
| 5.3415 | 46.28 | 119500 | 4.9327 |
| 5.3786 | 46.48 | 120000 | 4.8616 |
| 5.3862 | 46.67 | 120500 | 4.8863 |
| 5.3606 | 46.86 | 121000 | 4.9151 |
| 5.3605 | 47.06 | 121500 | 4.9053 |
| 5.3455 | 47.25 | 122000 | 4.9110 |
| 5.3264 | 47.44 | 122500 | 4.8673 |
| 5.3409 | 47.64 | 123000 | 4.8346 |
| 5.3567 | 47.83 | 123500 | 4.8996 |
| 5.3103 | 48.02 | 124000 | 4.8342 |
| 5.3244 | 48.22 | 124500 | 4.8464 |
| 5.3324 | 48.41 | 125000 | 4.8729 |
| 5.3273 | 48.61 | 125500 | 4.8125 |
| 5.31 | 48.8 | 126000 | 4.8519 |
| 5.2872 | 48.99 | 126500 | 4.8693 |
| 5.3066 | 49.19 | 127000 | 4.8600 |
| 5.302 | 49.38 | 127500 | 4.8171 |
| 5.2875 | 49.57 | 128000 | 4.7911 |
| 5.2806 | 49.77 | 128500 | 4.8004 |
| 5.3108 | 49.96 | 129000 | 4.7977 |
| 5.2741 | 50.15 | 129500 | 4.8427 |
| 5.2603 | 50.35 | 130000 | 4.7938 |
| 5.282 | 50.54 | 130500 | 4.7997 |
| 5.2835 | 50.74 | 131000 | 4.8173 |
| 5.2628 | 50.93 | 131500 | 4.7610 |
| 5.3034 | 51.12 | 132000 | 4.7908 |
| 5.2635 | 51.32 | 132500 | 4.7676 |
| 5.3269 | 51.51 | 133000 | 4.8245 |
| 5.242 | 51.7 | 133500 | 4.7265 |
| 5.2516 | 51.9 | 134000 | 4.7588 |
| 5.2641 | 52.09 | 134500 | 4.7695 |
| 5.2493 | 52.29 | 135000 | 4.7327 |
| 5.2334 | 52.48 | 135500 | 4.7206 |
| 5.2483 | 52.67 | 136000 | 4.7289 |
| 5.2133 | 52.87 | 136500 | 4.8136 |
| 5.2495 | 53.06 | 137000 | 4.6620 |
| 5.2489 | 53.25 | 137500 | 4.7118 |
| 5.2415 | 53.45 | 138000 | 4.7011 |
| 5.231 | 53.64 | 138500 | 4.7295 |
| 5.2211 | 53.83 | 139000 | 4.7199 |
| 5.2327 | 54.03 | 139500 | 4.7146 |
| 5.2053 | 54.22 | 140000 | 4.6871 |
| 5.2117 | 54.42 | 140500 | 4.7097 |
| 5.1929 | 54.61 | 141000 | 4.6923 |
| 5.2199 | 54.8 | 141500 | 4.7291 |
| 5.211 | 55.0 | 142000 | 4.7088 |
| 5.2482 | 55.19 | 142500 | 4.6551 |
| 5.2043 | 55.38 | 143000 | 4.7244 |
| 5.1799 | 55.58 | 143500 | 4.7225 |
| 5.2053 | 55.77 | 144000 | 4.6948 |
| 5.1745 | 55.96 | 144500 | 4.7157 |
| 5.1673 | 56.16 | 145000 | 4.6555 |
| 5.2122 | 56.35 | 145500 | 4.6842 |
| 5.1701 | 56.55 | 146000 | 4.6581 |
| 5.2107 | 56.74 | 146500 | 4.6245 |
| 5.2454 | 56.93 | 147000 | 4.6399 |
| 5.2134 | 57.13 | 147500 | 4.6585 |
| 5.1753 | 57.32 | 148000 | 4.6233 |
| 5.1355 | 57.51 | 148500 | 4.6543 |
| 5.2032 | 57.71 | 149000 | 4.6640 |
| 5.1714 | 57.9 | 149500 | 4.6635 |
| 5.1769 | 58.09 | 150000 | 4.6256 |
| 5.1632 | 58.29 | 150500 | 4.6456 |
| 5.1556 | 58.48 | 151000 | 4.6647 |
| 5.1671 | 58.68 | 151500 | 4.6548 |
| 5.1482 | 58.87 | 152000 | 4.6107 |
| 5.104 | 59.06 | 152500 | 4.6320 |
| 5.1545 | 59.26 | 153000 | 4.6035 |
| 5.1338 | 59.45 | 153500 | 4.6512 |
| 5.1518 | 59.64 | 154000 | 4.6424 |
| 5.1937 | 59.84 | 154500 | 4.6123 |
| 5.1576 | 60.03 | 155000 | 4.6077 |
| 5.1643 | 60.22 | 155500 | 4.5990 |
| 5.1371 | 60.42 | 156000 | 4.6025 |
| 5.1535 | 60.61 | 156500 | 4.5939 |
| 5.128 | 60.81 | 157000 | 4.5716 |
| 5.1711 | 61.0 | 157500 | 4.5895 |
| 5.1265 | 61.19 | 158000 | 4.6367 |
| 5.1131 | 61.39 | 158500 | 4.6565 |
| 5.1239 | 61.58 | 159000 | 4.6194 |
| 5.1089 | 61.77 | 159500 | 4.6214 |
| 5.1052 | 61.97 | 160000 | 4.5982 |
| 5.1336 | 62.16 | 160500 | 4.5861 |
| 5.1081 | 62.35 | 161000 | 4.5343 |
| 5.1706 | 62.55 | 161500 | 4.5480 |
| 5.0848 | 62.74 | 162000 | 4.5500 |
| 5.0848 | 62.94 | 162500 | 4.5965 |
| 5.0849 | 63.13 | 163000 | 4.5737 |
| 5.1267 | 63.32 | 163500 | 4.5680 |
| 5.124 | 63.52 | 164000 | 4.5341 |
| 5.1212 | 63.71 | 164500 | 4.5154 |
| 5.1214 | 63.9 | 165000 | 4.5329 |
| 5.117 | 64.1 | 165500 | 4.4988 |
| 5.0578 | 64.29 | 166000 | 4.5582 |
| 5.0705 | 64.48 | 166500 | 4.5346 |
| 5.0814 | 64.68 | 167000 | 4.5978 |
| 5.0959 | 64.87 | 167500 | 4.5628 |
| 5.0601 | 65.07 | 168000 | 4.5449 |
| 5.1112 | 65.26 | 168500 | 4.5499 |
| 5.0946 | 65.45 | 169000 | 4.5344 |
| 5.0965 | 65.65 | 169500 | 4.5324 |
| 5.0958 | 65.84 | 170000 | 4.4937 |
| 5.081 | 66.03 | 170500 | 4.5009 |
| 5.0506 | 66.23 | 171000 | 4.5145 |
| 5.0729 | 66.42 | 171500 | 4.4779 |
| 5.0628 | 66.62 | 172000 | 4.5531 |
| 5.0674 | 66.81 | 172500 | 4.5023 |
| 5.0634 | 67.0 | 173000 | 4.5124 |
| 5.0847 | 67.2 | 173500 | 4.5203 |
| 5.0729 | 67.39 | 174000 | 4.4887 |
| 5.0683 | 67.58 | 174500 | 4.5113 |
| 5.0596 | 67.78 | 175000 | 4.4898 |
| 5.0528 | 67.97 | 175500 | 4.5359 |
| 5.0595 | 68.16 | 176000 | 4.5139 |
| 5.0864 | 68.36 | 176500 | 4.5260 |
| 5.0241 | 68.55 | 177000 | 4.5325 |
| 5.1038 | 68.75 | 177500 | 4.4692 |
| 5.073 | 68.94 | 178000 | 4.5429 |
| 5.0667 | 69.13 | 178500 | 4.4781 |
| 5.041 | 69.33 | 179000 | 4.5035 |
| 5.033 | 69.52 | 179500 | 4.5177 |
| 5.0369 | 69.71 | 180000 | 4.4948 |
| 5.0265 | 69.91 | 180500 | 4.5544 |
| 5.0687 | 70.1 | 181000 | 4.5048 |
| 5.0464 | 70.29 | 181500 | 4.4532 |
| 5.0502 | 70.49 | 182000 | 4.5503 |
| 4.9993 | 70.68 | 182500 | 4.5011 |
| 5.041 | 70.88 | 183000 | 4.4769 |
| 5.0603 | 71.07 | 183500 | 4.4642 |
| 5.0448 | 71.26 | 184000 | 4.4527 |
| 5.0702 | 71.46 | 184500 | 4.4807 |
| 5.0418 | 71.65 | 185000 | 4.4724 |
| 4.9976 | 71.84 | 185500 | 4.4915 |
| 5.0502 | 72.04 | 186000 | 4.4591 |
| 5.0438 | 72.23 | 186500 | 4.4292 |
| 4.9812 | 72.42 | 187000 | 4.4252 |
| 5.0377 | 72.62 | 187500 | 4.4512 |
| 5.0117 | 72.81 | 188000 | 4.4617 |
| 4.976 | 73.01 | 188500 | 4.5048 |
| 5.05 | 73.2 | 189000 | 4.4400 |
| 5.0306 | 73.39 | 189500 | 4.4209 |
| 5.0648 | 73.59 | 190000 | 4.4707 |
| 5.0097 | 73.78 | 190500 | 4.4453 |
| 5.0611 | 73.97 | 191000 | 4.4601 |
| 5.0091 | 74.17 | 191500 | 4.4231 |
| 5.0529 | 74.36 | 192000 | 4.4110 |
| 5.0221 | 74.55 | 192500 | 4.5013 |
| 5.0156 | 74.75 | 193000 | 4.4717 |
| 5.0442 | 74.94 | 193500 | 4.4585 |
| 5.0229 | 75.14 | 194000 | 4.4601 |
| 4.9883 | 75.33 | 194500 | 4.4740 |
| 4.9963 | 75.52 | 195000 | 4.4663 |
| 4.9886 | 75.72 | 195500 | 4.4237 |
| 4.9753 | 75.91 | 196000 | 4.4762 |
| 4.981 | 76.1 | 196500 | 4.4573 |
| 4.9901 | 76.3 | 197000 | 4.4376 |
| 5.005 | 76.49 | 197500 | 4.4859 |
| 5.0254 | 76.68 | 198000 | 4.4181 |
| 5.0067 | 76.88 | 198500 | 4.4582 |
| 5.0097 | 77.07 | 199000 | 4.4494 |
| 4.9815 | 77.27 | 199500 | 4.4382 |
| 5.0029 | 77.46 | 200000 | 4.4780 |
| 4.9659 | 77.65 | 200500 | 4.4009 |
| 4.9889 | 77.85 | 201000 | 4.3664 |
| 4.9916 | 78.04 | 201500 | 4.4319 |
| 4.9715 | 78.23 | 202000 | 4.4390 |
| 4.9815 | 78.43 | 202500 | 4.4593 |
| 4.972 | 78.62 | 203000 | 4.4620 |
| 5.0164 | 78.81 | 203500 | 4.4247 |
| 4.9608 | 79.01 | 204000 | 4.4031 |
| 4.9606 | 79.2 | 204500 | 4.4301 |
| 4.9922 | 79.4 | 205000 | 4.4147 |
| 4.9825 | 79.59 | 205500 | 4.4489 |
| 4.9719 | 79.78 | 206000 | 4.4155 |
| 4.9663 | 79.98 | 206500 | 4.4514 |
| 4.9663 | 80.17 | 207000 | 4.4439 |
| 4.9351 | 80.36 | 207500 | 4.4235 |
| 5.0248 | 80.56 | 208000 | 4.4122 |
| 4.9836 | 80.75 | 208500 | 4.4261 |
| 4.9881 | 80.95 | 209000 | 4.4228 |
| 5.0021 | 81.14 | 209500 | 4.4588 |
| 4.9508 | 81.33 | 210000 | 4.3826 |
| 4.9729 | 81.53 | 210500 | 4.4254 |
| 4.9746 | 81.72 | 211000 | 4.3951 |
| 4.9771 | 81.91 | 211500 | 4.4301 |
| 4.9988 | 82.11 | 212000 | 4.3889 |
| 5.006 | 82.3 | 212500 | 4.4137 |
| 4.9662 | 82.49 | 213000 | 4.4597 |
| 4.9476 | 82.69 | 213500 | 4.4484 |
| 4.9801 | 82.88 | 214000 | 4.4676 |
| 4.9605 | 83.08 | 214500 | 4.3832 |
| 4.9617 | 83.27 | 215000 | 4.3933 |
| 4.9565 | 83.46 | 215500 | 4.4156 |
| 4.9193 | 83.66 | 216000 | 4.4221 |
| 4.942 | 83.85 | 216500 | 4.4150 |
| 4.9504 | 84.04 | 217000 | 4.4034 |
| 4.9469 | 84.24 | 217500 | 4.4364 |
| 4.9519 | 84.43 | 218000 | 4.4306 |
| 4.9555 | 84.62 | 218500 | 4.3787 |
| 4.9558 | 84.82 | 219000 | 4.4363 |
| 4.94 | 85.01 | 219500 | 4.4151 |
| 4.9441 | 85.21 | 220000 | 4.3747 |
| 4.9654 | 85.4 | 220500 | 4.3779 |
| 4.9352 | 85.59 | 221000 | 4.4293 |
| 4.9743 | 85.79 | 221500 | 4.3823 |
| 4.9536 | 85.98 | 222000 | 4.4049 |
| 4.9426 | 86.17 | 222500 | 4.3719 |
| 4.9363 | 86.37 | 223000 | 4.3414 |
| 4.9093 | 86.56 | 223500 | 4.3717 |
| 4.935 | 86.75 | 224000 | 4.3860 |
| 4.9204 | 86.95 | 224500 | 4.3939 |
| 4.926 | 87.14 | 225000 | 4.4328 |
| 4.9291 | 87.34 | 225500 | 4.4435 |
| 4.9162 | 87.53 | 226000 | 4.4062 |
| 4.9298 | 87.72 | 226500 | 4.3990 |
| 4.9743 | 87.92 | 227000 | 4.4284 |
| 4.9135 | 88.11 | 227500 | 4.3740 |
| 4.9138 | 88.3 | 228000 | 4.3697 |
| 4.9686 | 88.5 | 228500 | 4.3498 |
| 4.9263 | 88.69 | 229000 | 4.3457 |
| 4.9453 | 88.88 | 229500 | 4.3315 |
| 4.9329 | 89.08 | 230000 | 4.3874 |
| 4.9277 | 89.27 | 230500 | 4.3627 |
| 4.8942 | 89.47 | 231000 | 4.3674 |
| 4.9496 | 89.66 | 231500 | 4.4107 |
| 4.924 | 89.85 | 232000 | 4.3855 |
| 4.9825 | 90.05 | 232500 | 4.3674 |
| 4.9365 | 90.24 | 233000 | 4.3662 |
| 4.9123 | 90.43 | 233500 | 4.3669 |
| 4.9555 | 90.63 | 234000 | 4.3668 |
| 4.9394 | 90.82 | 234500 | 4.3677 |
| 4.9672 | 91.01 | 235000 | 4.3339 |
| 4.9493 | 91.21 | 235500 | 4.3554 |
| 4.9114 | 91.4 | 236000 | 4.3507 |
| 4.9374 | 91.6 | 236500 | 4.3447 |
| 4.9288 | 91.79 | 237000 | 4.3988 |
| 4.9156 | 91.98 | 237500 | 4.3785 |
| 4.9226 | 92.18 | 238000 | 4.3322 |
| 4.9223 | 92.37 | 238500 | 4.3461 |
| 4.9051 | 92.56 | 239000 | 4.3603 |
| 4.9341 | 92.76 | 239500 | 4.4139 |
| 4.9285 | 92.95 | 240000 | 4.3757 |
| 4.9506 | 93.14 | 240500 | 4.3456 |
| 4.92 | 93.34 | 241000 | 4.3492 |
| 4.9027 | 93.53 | 241500 | 4.3982 |
| 4.9366 | 93.73 | 242000 | 4.3651 |
| 4.9072 | 93.92 | 242500 | 4.3186 |
| 4.9441 | 94.11 | 243000 | 4.3560 |
| 4.874 | 94.31 | 243500 | 4.3749 |
| 4.9246 | 94.5 | 244000 | 4.3345 |
| 4.8971 | 94.69 | 244500 | 4.3497 |
| 4.9234 | 94.89 | 245000 | 4.4110 |
| 4.9396 | 95.08 | 245500 | 4.3645 |
| 4.8943 | 95.27 | 246000 | 4.3204 |
| 4.9194 | 95.47 | 246500 | 4.4034 |
| 4.914 | 95.66 | 247000 | 4.3936 |
| 4.9376 | 95.86 | 247500 | 4.3477 |
| 4.9042 | 96.05 | 248000 | 4.4062 |
| 4.8946 | 96.24 | 248500 | 4.4115 |
| 4.8959 | 96.44 | 249000 | 4.3983 |
| 4.9408 | 96.63 | 249500 | 4.3633 |
| 4.9039 | 96.82 | 250000 | 4.3486 |
| 4.9368 | 97.02 | 250500 | 4.3819 |
| 4.8793 | 97.21 | 251000 | 4.3586 |
| 4.9069 | 97.41 | 251500 | 4.3666 |
| 4.9339 | 97.6 | 252000 | 4.3911 |
| 4.9086 | 97.79 | 252500 | 4.3505 |
| 4.9132 | 97.99 | 253000 | 4.3878 |
| 4.9279 | 98.18 | 253500 | 4.3422 |
| 4.8955 | 98.37 | 254000 | 4.3913 |
| 4.8874 | 98.57 | 254500 | 4.3560 |
| 4.9026 | 98.76 | 255000 | 4.3189 |
| 4.9008 | 98.95 | 255500 | 4.4185 |
| 4.9023 | 99.15 | 256000 | 4.3197 |
| 4.8792 | 99.34 | 256500 | 4.3112 |
| 4.9193 | 99.54 | 257000 | 4.3886 |
| 4.9136 | 99.73 | 257500 | 4.3596 |
| 4.8953 | 99.92 | 258000 | 4.3615 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Tom11/xlm-roberta-base-finetuned-panx-all
|
Tom11
| 2022-11-15T21:57:04Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T17:09:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1637
- F1: 0.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
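As a hedged, illustrative sketch, the checkpoint can be loaded with the standard token-classification pipeline; the entity label set depends on the PAN-X configuration used for fine-tuning, so inspect the model config before relying on the labels:
```python
from transformers import pipeline

# illustrative usage sketch; aggregation_strategy groups word-piece predictions into entity spans
ner = pipeline(
    "token-classification",
    model="Tom11/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("George Washington lived in Mount Vernon, Virginia."))
```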
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
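These settings map roughly onto `TrainingArguments` as in the illustrative sketch below (the exact training script may differ; the Adam betas and epsilon listed above are the library defaults):
```python
from transformers import TrainingArguments

# mirrors the hyperparameter list above
args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-all",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
)
```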
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.29 | 1.0 | 715 | 0.1885 | 0.8231 |
| 0.1443 | 2.0 | 1430 | 0.1607 | 0.8479 |
| 0.0937 | 3.0 | 2145 | 0.1637 | 0.8581 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
RawMean/farsi_lastname_classifier_4
|
RawMean
| 2022-11-15T21:22:26Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-15T20:59:12Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: farsi_lastname_classifier_4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# farsi_lastname_classifier_4
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2337
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
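For reference, the cosine schedule with 10% warmup above amounts to roughly 180 optimizer steps (15 epochs × 12 steps per epoch, per the results table below); a hedged sketch of that schedule in isolation:
```python
import torch
from transformers import get_cosine_schedule_with_warmup

num_training_steps = 15 * 12  # epochs x steps per epoch, read off the results table
optimizer = torch.optim.AdamW([torch.nn.Parameter(torch.zeros(1))], lr=1e-4)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * num_training_steps),  # lr_scheduler_warmup_ratio: 0.1
    num_training_steps=num_training_steps,
)
```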
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 12 | 0.5673 | 0.836 |
| No log | 2.0 | 24 | 0.4052 | 0.868 |
| No log | 3.0 | 36 | 0.2211 | 0.932 |
| No log | 4.0 | 48 | 0.2488 | 0.926 |
| No log | 5.0 | 60 | 0.1490 | 0.954 |
| No log | 6.0 | 72 | 0.1464 | 0.968 |
| No log | 7.0 | 84 | 0.1923 | 0.954 |
| No log | 8.0 | 96 | 0.2070 | 0.96 |
| No log | 9.0 | 108 | 0.2055 | 0.962 |
| No log | 10.0 | 120 | 0.2436 | 0.942 |
| No log | 11.0 | 132 | 0.2173 | 0.96 |
| No log | 12.0 | 144 | 0.2342 | 0.956 |
| No log | 13.0 | 156 | 0.2337 | 0.962 |
| No log | 14.0 | 168 | 0.2332 | 0.96 |
| No log | 15.0 | 180 | 0.2337 | 0.96 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
langdonholmes/en_student_name_detector
|
langdonholmes
| 2022-11-15T20:56:43Z | 4 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2022-11-15T19:17:00Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_student_name_detector
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8311688312
- name: NER Recall
type: recall
value: 0.8421052632
- name: NER F Score
type: f_score
value: 0.8366013072
---
| Feature | Description |
| --- | --- |
| **Name** | `en_student_name_detector` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Sources** | [longformer](https://huggingface.co/allenai/longformer-base-4096) |
| **License** | [Apache 2.0](https://huggingface.co/langdonholmes/en_student_name_detector/blob/main/LICENSE) |
| **Author** | [Langdon Holmes](https://huggingface.co/langdonholmes) |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `STUDENT` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 83.66 |
| `ENTS_P` | 83.12 |
| `ENTS_R` | 84.21 |
| `TRANSFORMER_LOSS` | 56255026.35 |
| `NER_LOSS` | 31154.89 |
### Training Data
6,293 student writing assignments were submitted as PDF files. All documents were reflection assignments in response to the same prompt in the same online course. Student names were labeled by human raters (one rater per document). A preliminary model was trained and all disagreements between this model and the human annotations were adjudicated by two additional reviewers. The training dataset includes all 6,293 documents, 845 of which include student names. There are 1,155 student name annotations in total.
### To Use
This model has been packaged using spaCy. It is available as a huggingface model or a pip package. Performance of the model should be evaluated on in-domain data before deployment in production, particularly when confidential information is involved.
|
Yujun1of1/concrete-finetuned-imdb
|
Yujun1of1
| 2022-11-15T20:38:36Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-15T07:45:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Yujun1of1/concrete-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Yujun1of1/concrete-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2256
- Validation Loss: 2.6946
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -687, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
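A hedged reconstruction of the serialized optimizer above using `transformers.create_optimizer`; the warmup and total step counts are read off the config (1000 warmup steps, and 1000 − 687 = 313 total steps), and the actual training script may differ:
```python
import tensorflow as tf
from transformers import TFAutoModelForMaskedLM, create_optimizer

tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision above

model = TFAutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")

# step counts inferred from the serialized WarmUp/PolynomialDecay schedule above
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=313,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
model.compile(optimizer=optimizer)  # the model's internal loss is used when no loss is given
```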
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.2256 | 2.6946 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
technillogue/waifu-diffusion
|
technillogue
| 2022-11-15T20:37:01Z | 17 | 5 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-10-27T06:35:06Z |
---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# waifu-diffusion v1.3 - Diffusion for Weebs
waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.
<img src=https://i.imgur.com/Y5Tmw1S.png width=75% height=75%>
[Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-3)
# Gradio & Colab
We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run Waifu Diffusion:
[](https://huggingface.co/spaces/hakurei/waifu-diffusion-demo)
[](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE#scrollTo=1HaCauSq546O)
## Model Description
[See here for a full model overview.](https://gist.github.com/harubaru/f727cedacae336d1f7877c4bbe2196e1)
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
## Downstream Uses
This model can be used for entertainment purposes and as a generative art assistant.
## Example Code
```python
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline
# load the pipeline (expects the weights in a local `waifu-diffusion` directory or a Hub repo id)
pipe = StableDiffusionPipeline.from_pretrained(
    'waifu-diffusion',
    torch_dtype=torch.float32
).to('cuda')
prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
with autocast("cuda"):
    # older diffusers releases return generated images under the "sample" key
    image = pipe(prompt, guidance_scale=6)["sample"][0]
image.save("test.png")
```
## Team Members and Acknowledgements
This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).
- [Anthony Mercurio](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)
- [Sta @ Bit192](https://twitter.com/naclbbr)
In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).
[](https://discord.gg/touhouai)
|
teragron/capybara
|
teragron
| 2022-11-15T20:28:54Z | 40 | 0 |
diffusers
|
[
"diffusers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2022-11-15T20:26:22Z |
---
license: mit
---
### capybara on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by teragron
This is the Stable Diffusion model fine-tuned on the capybara concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **indir.png**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Sample pictures of this concept:
indir.png

|
Umarpreet/distilbert-base-uncased-finetuned-squad
|
Umarpreet
| 2022-11-15T20:00:51Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-11-15T05:39:36Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7518
## Model description
More information needed
## Intended uses & limitations
More information needed
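As an illustrative sketch, the checkpoint can be queried with the question-answering pipeline (the example question and context below are hypothetical):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Umarpreet/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```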
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5791 | 1.0 | 554 | 2.2242 |
| 2.0656 | 2.0 | 1108 | 1.8537 |
| 1.6831 | 3.0 | 1662 | 1.7848 |
| 1.4963 | 4.0 | 2216 | 1.7518 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Wheatley961/Raw_2_no_1_Test_2_new.model
|
Wheatley961
| 2022-11-15T19:57:06Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T19:56:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 98 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 98,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
brwoodside/distilgpt2-finetuned-wikitext2
|
brwoodside
| 2022-11-15T19:51:38Z | 166 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-15T19:19:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
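As an illustrative sketch, the checkpoint can be tried with the text-generation pipeline like any other GPT-2 variant (the prompt below is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="brwoodside/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```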
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
reza-aditya/Reinforce-Pong-PLE-v0
|
reza-aditya
| 2022-11-15T19:47:43Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-13T18:53:13Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pong-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0**.
To learn to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
coderSounak/finetuned_twitter_hate_speech_roberta
|
coderSounak
| 2022-11-15T19:33:34Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-11-12T14:50:47Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuned_twitter_hate_speech_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_twitter_hate_speech_roberta
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4729
- Accuracy: 0.8567
- F1: 0.8651
- Precision: 0.8242
- Recall: 0.9103
## Model description
More information needed
## Intended uses & limitations
More information needed
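An illustrative inference sketch; the label names come from the fine-tuning setup and are not documented here, so inspect `model.config.id2label` before relying on them:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "coderSounak/finetuned_twitter_hate_speech_roberta"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("example tweet text", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print({model.config.id2label[i]: p.item() for i, p in enumerate(probs[0])})
```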
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
princeton-nlp/sup-simcse-bert-large-uncased
|
princeton-nlp
| 2022-11-15T19:14:55Z | 2,187 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"arxiv:2104.08821",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
tags:
- feature-extraction
- bert
---
# Model Card for sup-simcse-bert-large-uncased
# Model Details
## Model Description
More information needed
- **Developed by:** Princeton NLP group
- **Shared by [Optional]:** Princeton NLP group
- **Model type:** Feature Extraction
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/princeton-nlp/SimCSE)
- [Associated Paper](https://arxiv.org/abs/2104.08821)
# Uses
## Direct Use
This model can be used for the task of feature extraction.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
The model creators note in the [GitHub repository](https://github.com/princeton-nlp/SimCSE/blob/main/README.md)
> We train unsupervised SimCSE on 10^6 randomly sampled sentences from English Wikipedia, and train supervised SimCSE on the combination of MNLI and SNLI datasets (314k).
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf)
> Our evaluation code for sentence embeddings is based on a modified version of [SentEval](https://github.com/facebookresearch/SentEval). It evaluates sentence embeddings on semantic textual similarity (STS) tasks and downstream transfer tasks.
For STS tasks, our evaluation takes the "all" setting, and report Spearman's correlation. See [associated paper](https://arxiv.org/pdf/2104.08821.pdf) (Appendix B) for evaluation details.
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
The model creators note in the [associated paper](https://arxiv.org/pdf/2104.08821.pdf):
> **Uniformity and alignment.**
We also observe that (1) though pre-trained embeddings have good alignment, their uniformity is poor (i.e., the embeddings are highly anisotropic); (2) post-processing methods like BERT-flow and BERT-whitening greatly improve uniformity but also suffer a degeneration in alignment; (3) unsupervised SimCSE effectively improves uniformity of pre-trained embeddings whereas keeping a good alignment;(4) incorporating supervised data in SimCSE further amends alignment.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** Nvidia 3090 GPUs with CUDA 11
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
**BibTeX:**
```bibtex
@inproceedings{gao2021simcse,
title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings},
author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi},
booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
year={2021}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Princeton NLP group in collaboration with Ezi Ozoani and the Hugging Face team.
# Model Card Contact
If you have any questions related to the code or the paper, feel free to email Tianyu (`tianyug@cs.princeton.edu`) and Xingcheng (`yxc18@mails.tsinghua.edu.cn`). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please try to specify the problem with details so we can help you better and quicker!
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-bert-large-uncased")
model = AutoModel.from_pretrained("princeton-nlp/sup-simcse-bert-large-uncased")
```
</details>
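A follow-up sketch for computing sentence similarity with this model; using `pooler_output` as the sentence embedding is an assumption here — consult the SimCSE repository for the exact pooling used in the paper:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/sup-simcse-bert-large-uncased")
model = AutoModel.from_pretrained("princeton-nlp/sup-simcse-bert-large-uncased")

sentences = ["A man is playing a guitar.", "Someone is playing an instrument."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    embeddings = model(**inputs).pooler_output  # assumed pooling choice
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```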
|
Wheatley961/Raw_1_no_3_Test_2_new.model
|
Wheatley961
| 2022-11-15T19:03:49Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T19:03:23Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 103 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 103,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
EnglishVoice/t5-base-keywords-to-headline
|
EnglishVoice
| 2022-11-15T19:03:01Z | 178 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"paraphrase-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T17:49:11Z |
---
language:
- en
tags:
- text2text-generation
- paraphrase-generation
license: apache-2.0
widget:
- text: "headline: weight loss"
---
### About the model
The model has been trained on [a dataset containing 138927 article titles](https://www.englishvoice.ai/p/keywords-and-titles/ "a dataset containing 138927 article titles") along with their keywords.
The purpose of the model is to generate suggestions of article headlines, given a keyword or multiple keywords.
### Generation examples
| Input | Output |
| :------------ | :------------ |
| weight loss | The Last Weight Loss Plan: Lose Weight, Feel Great, and Get in Shape <br/>How to Lose Weight Without Giving Up Your Favorite Foods <br/> I Lost Weight and Finally Feel Good About My Body |
| property rental, property management | Property rental: The new way to make money <br/> We take the hassle out of property rental <br/> Is property management your new best friend? |
| diabetic diet plan | A diabetic diet plan that actually works! <br/> Lose weight, feel great, and live better with our diabetic diet plan! <br/> Diet has never been so tasty: Our diabetic diet plan puts you to the test! |
You can supply multiple keywords by separating them with commas. Higher temperature settings result in more creative headlines; we recommend testing first with the temperature set to 1.5.
### The dataset
The dataset was developed by English Voice AI Labs. You can download it from our website:
[https://www.EnglishVoice.ai/](https://www.EnglishVoice.ai/ "https://www.EnglishVoice.ai/")
### Sample code
Python code for generating headlines:
```python
import torch
from transformers import T5ForConditionalGeneration,T5Tokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = T5ForConditionalGeneration.from_pretrained("EnglishVoice/t5-base-keywords-to-headline")
tokenizer = T5Tokenizer.from_pretrained("EnglishVoice/t5-base-keywords-to-headline")
model = model.to(device)
keywords = "weight loss, weight pills"
text = "headline: " + keywords
encoding = tokenizer.encode_plus(text, return_tensors = "pt")
input_ids = encoding["input_ids"].to(device)
attention_masks = encoding["attention_mask"].to(device)
beam_outputs = model.generate(
    input_ids=input_ids,
    attention_mask=attention_masks,
    do_sample=True,
    num_return_sequences=5,
    temperature=0.95,
    early_stopping=True,
    top_k=50,
    top_p=0.95,
)
for i in range(len(beam_outputs)):
    result = tokenizer.decode(beam_outputs[i], skip_special_tokens=True)
    print(result)
```
Sample result:
I Am Losing Weight and I Love It!
New Weight Loss Pill Helps You Get the Body You Want!
I Lost Weight By Taking Pills!
The Truth About Weight Loss Pills!
The Best Weight Loss Pills Money Can Buy!
|
docmparker/t5-small-finetuned-xsum
|
docmparker
| 2022-11-15T18:59:33Z | 111 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-08T16:26:43Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
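As an illustrative, unverified sketch, the model can be tried through the summarization pipeline since it was fine-tuned on XSum (the example article below is arbitrary):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="docmparker/t5-small-finetuned-xsum")
article = "The tower is 324 metres tall, about the same height as an 81-storey building, and is the tallest structure in Paris."
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])
```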
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Enverrr/tas_kagit_makas
|
Enverrr
| 2022-11-15T18:58:35Z | 188 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-11-15T18:58:23Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: tas_kagit_makas
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
# tas_kagit_makas
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### paper

#### rock

#### scissors

|
nubby/Female_Au_Ra-FFXIV
|
nubby
| 2022-11-15T18:11:44Z | 0 | 4 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-06T07:41:29Z |
---
license: creativeml-openrail-m
---
This model is a work in progress. It isn't perfect, but it can generate some nice images. I have trained Xaela (dark horns and scales) and Raen (light horns and scales) as separate concepts, which will allow you to specify which you would like in your image by using the tokens specified below.
I hope you enjoy it as much as I have! I'd love to hear your feedback.
Waifu-Diffusion-v1-3-based Stable Diffusion model with Dreambooth training on images from many different artists. The model is trained to 11,000 steps on 80 different images of female Au Ra, a playable race with dragon-like horns and patches of scales from the critically acclaimed MMORPG Final Fantasy XIV (have you heard about the game's free trial, btw?).
## Usage
Can be used in StableDiffusion, including the extremely popular Web UI by Automatic1111, like any other model by placing the .CKPT file in the correct directory. Please consult the documentation for your installation of StableDiffusion for more specific instructions.
Use ```"m_arxla"``` for Xaela clan Au Ra or ```"m_arrn"``` for Raen clan Au Ra in your prompt to invoke the style of the desired clan.
## Recommended negative prompt
```"poorly drawn, bad quality, colored skin"```
You can also add the following to your negative prompt in order to steer your output towards the WD1.3 default caucasian skintone: ```"blue skin, purple skin"```
If you are generating Raen Au Ra I highly recommend also adding ```"black scales"``` to your negative prompt as the AI will often draw dark scales without it.
I do NOT recommend adding the common negative prompt tags such as ```"bad anatomy, disfigured, deformed, gross, etc..."```
## Example prompt
```"m_arrn, 1girl, light smile, detailed eyes, extremely detailed face, sidelocks, black hair, grey eyes, long hair, hair clip, collarbone, tank top, shorts, looking to the side, highly detailed face, extremely detailed, intricate, best quality, ultra realistic, cowboy shot, holding shopping bags at the mall, highly detailed background"```
Negative Prompt: ```"poorly drawn, bad quality, colored skin, blue skin, purple skin, midriff, black scales"```
Sampler: DPM++ 2S a, Sampling Steps: 22, CFG scale: 11, H: 768, W: 512
## Xaela example images using ```"m_arxla"```
<table>
<tr>
<td><img src=https://i.imgur.com/gvgZeT1.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/wWFDxCS.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/yzWhulJ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/TjTIbIz.png width=100% height=100%/></td>
</tr>
</table>
## Raen example images using ```"m_arrn"```
<table>
<tr>
<td><img src=https://i.imgur.com/jwoWZWE.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/k1XPAZI.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/MvcSlAd.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/3PLuE1V.png width=100% height=100%/></td>
</tr>
</table>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
dxiao/bert-finetuned-ner-100percent
|
dxiao
| 2022-11-15T17:48:25Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T17:41:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-100percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-100percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5711
- Precision: 0.8227
- Recall: 0.8498
- F1: 0.8360
- Accuracy: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
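For reference, here is a sketch of how these settings could be expressed as `transformers` `TrainingArguments`; the output directory and everything not listed above are assumptions.
```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; model, tokenizer and dataset setup omitted.
training_args = TrainingArguments(
    output_dir="bert-finetuned-ner-100percent",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=2022,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```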
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.5329 | 0.8228 | 0.8438 | 0.8332 | 0.9277 |
| No log | 2.0 | 150 | 0.5674 | 0.8110 | 0.8438 | 0.8271 | 0.9242 |
| No log | 3.0 | 225 | 0.5711 | 0.8227 | 0.8498 | 0.8360 | 0.9254 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
GDJ1978/anytronredXf222
|
GDJ1978
| 2022-11-15T17:45:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-11-15T17:34:39Z |
Merged checkpoints of anyXtronXredshift (0.6) and f222 (0.4).
RunwayML 1.5 is the ancestral base model.
|
dxiao/bert-finetuned-ner-90percent
|
dxiao
| 2022-11-15T17:41:51Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T17:35:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-90percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-90percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5444
- Precision: 0.8236
- Recall: 0.8483
- F1: 0.8358
- Accuracy: 0.9263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 68 | 0.4938 | 0.8223 | 0.8544 | 0.8380 | 0.9284 |
| No log | 2.0 | 136 | 0.5465 | 0.8265 | 0.8514 | 0.8388 | 0.9256 |
| No log | 3.0 | 204 | 0.5444 | 0.8236 | 0.8483 | 0.8358 | 0.9263 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
jbreunig/xlm-roberta-base-finetuned-panx-en
|
jbreunig
| 2022-11-15T17:41:47Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T12:15:33Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.68561872909699
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4175
- F1: 0.6856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1397 | 1.0 | 50 | 0.5561 | 0.5147 |
| 0.5148 | 2.0 | 100 | 0.4851 | 0.6312 |
| 0.3772 | 3.0 | 150 | 0.4175 | 0.6856 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
yeeb/distilgpt2_trading-fours
|
yeeb
| 2022-11-15T17:37:14Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-15T17:20:47Z |
https://wandb.ai/yeeeb/trading-fours
|
Finnish-NLP/ul2-small-nl16-finnish
|
Finnish-NLP
| 2022-11-15T17:18:30Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"ul2",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2205.05131",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-10-15T15:57:33Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
- ul2
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# UL2-small-nl16 for Finnish
Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-small-nl16](https://huggingface.co/google/t5-efficient-small-nl16) architecture's layer depth which means both the encoder and the decoder have 16 transformer layers compared to the original T5 "small" model's architecture of 6 transformer layers.
In total, this model has 184 million parameters.
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training denoising task. During pretraining, a paradigm token is inserted into the input (`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand. Then, during fine-tuning, the same token should be inserted into the input to get the best performance on different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, unlike Google's original T5 model, this model has to be fine-tuned before it is usable on a downstream task, like text classification. **Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, you will most likely get better results if you prepend a prefix token of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts. For general language understanding fine-tuning tasks, you could use the `[NLU]` token. For GPT-style causal language generation, you could use the `[S2S]` token. The `[NLG]` token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could perhaps be used for language generation fine-tuning too.
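As a small illustration of this prefixing (the helper below is hypothetical, not part of the released training code), inputs for a general language-understanding fine-tuning task could be prepared like this:
```python
# Minimal sketch: prepend the UL2 mode token to each input text before tokenization.
def add_mode_token(texts, mode="[NLU]"):
    return [f"{mode} {text}" for text in texts]

examples = add_mode_token(["Tämä on esimerkkilause.", "Toinen suomenkielinen esimerkki."])
print(examples)  # ['[NLU] Tämä on esimerkkilause.', '[NLU] Toinen suomenkielinen esimerkki.']
```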
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-small-nl16-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-small-nl16-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-small-nl16-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-small-nl16-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model trained only on very clean Finnish text. This perplexity score can then be used to determine how "clean" the Finnish text is. Lastly, all datasets were concatenated and the 90th-percentile perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (66B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root (exponential) decay of the learning rate.
Training code was from Google's Jax/Flax-based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2. Used UL2 objective code is available in this repository in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper, but for the denoiser mixing rates, 20% was used for S-denoising (as suggested in chapter 4.5 of the paper) and the rest was divided equally between R-denoising and X-denoising (i.e. 40% for each).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text.
When fine-tuned on those datasets, this model (the third row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 |
|Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 |
|Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 |
|Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 |
|Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 |
Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are following:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
dxiao/bert-finetuned-ner-50percent
|
dxiao
| 2022-11-15T17:15:59Z | 127 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T17:09:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-50percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-50percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4952
- Precision: 0.7881
- Recall: 0.8378
- F1: 0.8122
- Accuracy: 0.9145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.4743 | 0.7904 | 0.8378 | 0.8134 | 0.9105 |
| No log | 2.0 | 76 | 0.4813 | 0.7847 | 0.8318 | 0.8076 | 0.9147 |
| No log | 3.0 | 114 | 0.4952 | 0.7881 | 0.8378 | 0.8122 | 0.9145 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Finnish-NLP/ul2-tiny-nl6-finnish
|
Finnish-NLP
| 2022-11-15T17:11:26Z | 172 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"ul2",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2205.05131",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text2text-generation
| 2022-10-31T16:12:49Z |
---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
- ul2
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# UL2-tiny-nl6 for Finnish
Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-tiny-nl6](https://huggingface.co/google/t5-efficient-tiny-nl6) architecture's layer depth which means both the encoder and the decoder have 6 transformer layers compared to the original T5 "tiny" model's architecture of 4 transformer layers.
In total, this model has 31 million parameters.
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with a specific pre-training denoising task. During pretraining, a paradigm token is inserted into the input (`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand. Then, during fine-tuning, the same token should be inserted into the input to get the best performance on different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, unlike Google's original T5 model, this model has to be fine-tuned before it is usable on a downstream task, like text classification. **Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, you will most likely get better results if you prepend a prefix token of `[NLU]`, `[NLG]`, or `[S2S]` to your input texts. For general language understanding fine-tuning tasks, you could use the `[NLU]` token. For GPT-style causal language generation, you could use the `[S2S]` token. The `[NLG]` token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so it could perhaps be used for language generation fine-tuning too.
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-tiny-nl6-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model trained only on very clean Finnish text. This perplexity score can then be used to determine how "clean" the Finnish text is. Lastly, all datasets were concatenated and the 90th-percentile perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 512 (131B tokens in total). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root (exponential) decay of the learning rate.
Training code was from Google's Jax/Flax-based [t5x framework](https://github.com/google-research/t5x), and some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2. Used UL2 objective code is available in this repository in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper, but for the denoiser mixing rates, 20% was used for S-denoising (as suggested in chapter 4.5 of the paper) and the rest was divided equally between R-denoising and X-denoising (i.e. 40% for each).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text.
When fine-tuned on those datasets, this model (the first row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 |
|Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 |
|Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 |
|Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 |
|Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 |
Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are following:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗
|
reza-aditya/a2c-AntBulletEnv-v0
|
reza-aditya
| 2022-11-15T17:11:06Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-11-15T17:09:54Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1671.41 +/- 322.64
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the saved agent from the Hub and load it with SB3; the filename is assumed.
checkpoint = load_from_hub("reza-aditya/a2c-AntBulletEnv-v0", "a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Tom11/xlm-roberta-base-finetuned-panx-en
|
Tom11
| 2022-11-15T17:08:43Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T16:37:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: train
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.68561872909699
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4175
- F1: 0.6856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1397 | 1.0 | 50 | 0.5561 | 0.5147 |
| 0.5148 | 2.0 | 100 | 0.4851 | 0.6312 |
| 0.3772 | 3.0 | 150 | 0.4175 | 0.6856 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cpu
- Datasets 1.16.1
- Tokenizers 0.13.2
|
frozi/setfit-ethos-multilabel-example
|
frozi
| 2022-11-15T17:07:08Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-15T17:06:51Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 242 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 242,
"warmup_steps": 25,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
dxiao/bert-finetuned-ner-30percent
|
dxiao
| 2022-11-15T17:03:50Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T16:57:47Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-30percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-30percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4543
- Precision: 0.6879
- Recall: 0.7613
- F1: 0.7227
- Accuracy: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 23 | 0.5153 | 0.6498 | 0.7523 | 0.6973 | 0.8612 |
| No log | 2.0 | 46 | 0.4693 | 0.6675 | 0.7568 | 0.7094 | 0.8786 |
| No log | 3.0 | 69 | 0.4543 | 0.6879 | 0.7613 | 0.7227 | 0.8878 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
atlijas/byt5-is-ocr-post-processing-modern-texts
|
atlijas
| 2022-11-15T17:01:30Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"is",
"arxiv:1907.06292",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T16:09:43Z |
---
language: is
license: apache-2.0
widget:
- text: "^Fyrsta bam ársins fæddist á Landspítalanum kl. 3.30 á nýársnótt."
---
# Details of ByT5 - Base 🧠
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
# Details of byt5-is-ocr-post-processing-modern-texts
*Note: This model is almost the same as [atlijas/byt5-is-ocr-post-processing-old-texts](https://huggingface.co/atlijas/byt5-is-ocr-post-processing-old-texts/). The only difference is the amount of epochs trained.*
This model generates a revised version of a given Icelandic OCRed text. The model was trained with [simpleT5](https://github.com/Shivanandroy/simpleT5) on 900.000 lines (\~7.000.000 tokens) of which only 50.000 (\~400.000 tokens) were from real OCRed texts. The rest were extracted from [The Icelandic Gigaword Corpus](https://clarin.is/en/resources/gigaword/) and augmented with artificial errors. It can be assumed that increasing the amount of OCRed data can significantly improve the model.
For inference, it is recommended to feed the model one line (not necessarily whole sentences, though) at a time.
# Usage
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
MODEL = 'atlijas/byt5-is-ocr-post-processing-modern-texts'
correct_ocr = pipeline('text2text-generation', model=MODEL, tokenizer=MODEL, num_return_sequences=1)
dataset = load_dataset('/path/to/', data_files='my_ocred_file.txt')
lines = dataset['train']
file_length = len(lines)
for corrected in correct_ocr(KeyDataset(lines, 'text'), max_length=150, batch_size=32):
print(corrected[0]['generated_text'])
```
# Evaluation results
The test set for this model consists of various Icelandic texts from the 80's and 90's. On it, the model achieves a chrF error rate reduction of 30.1%, with the original text's score being 95.2, and the processed one's 96.7. The model achieves a proportional BLEU improvement of 19.8%, with the original text's BLEU score being 97.55 and the processed one's 98.0.
# Acknowledgments
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
atlijas/byt5-is-ocr-post-processing-old-texts
|
atlijas
| 2022-11-15T16:54:24Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"is",
"arxiv:1907.06292",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-11-15T13:58:44Z |
---
language: is
license: apache-2.0
widget:
- text: "Yonum vjer að pað pví fremur fái góðar viðtökur, par sem svo lítur út, sem aldrei muni verða svo heiðskýrt á pessum vetri að „Noi'ðurljósið“ sjáist, eu paðan væntum vér allir skemmtunar."
---
# Details of ByT5 - Base 🧠
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
# Details of byt5-is-ocr-post-processing-old-texts
This model generates a revised version of a given Icelandic OCRed text. The model was trained with [simpleT5](https://github.com/Shivanandroy/simpleT5) on 900,000 lines (\~7,000,000 tokens) of which only 50,000 (\~400,000 tokens) were from real OCRed texts. The rest were extracted from [The Icelandic Gigaword Corpus](https://clarin.is/en/resources/gigaword/) and augmented with artificial errors. It can be assumed that increasing the amount of OCRed data can significantly improve the model.
For inference, it is recommended to feed the model one line (not necessarily whole sentences, though) at a time.
# Usage
```python
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
MODEL = 'atlijas/byt5-is-ocr-post-processing-old-texts'
correct_ocr = pipeline('text2text-generation', model=MODEL, tokenizer=MODEL, num_return_sequences=1)
dataset = load_dataset('/path/to/', data_files='my_ocred_file.txt')
lines = dataset['train']
file_length = len(lines)
for corrected in correct_ocr(KeyDataset(lines, 'text'), max_length=150, batch_size=32):
print(corrected[0]['generated_text'])
```
# Evaluation results
The test set for this model consists of various Icelandic texts from the 19th and early 20th century. On it, the model achieves a chrF error rate reduction of 39.3%, with the original text's score being 94.6, and the processed one's 96.7. The model achieves a proportional BLEU improvement of 51.6%, with the original text's BLEU score being 97.2 and the processed one's 98.6.
# Acknowledgments
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by [Almannarómur](https://almannaromur.is/), is funded by the Icelandic Ministry of Education, Science and Culture.
|
dxiao/bert-finetuned-ner-10percent
|
dxiao
| 2022-11-15T16:51:24Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-11-15T16:45:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner-10percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner-10percent
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5010
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.4556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2022
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 8 | 1.7713 | 0.0 | 0.0 | 0.0 | 0.4508 |
| No log | 2.0 | 16 | 1.5699 | 0.0 | 0.0 | 0.0 | 0.4517 |
| No log | 3.0 | 24 | 1.5010 | 0.0 | 0.0 | 0.0 | 0.4556 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist
|
PlanTL-GOB-ES
| 2022-11-15T16:40:59Z | 113 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-07T14:29:07Z |
---
language:
- es
tags:
- biomedical
- clinical
- eHR
- spanish
license: apache-2.0
datasets:
- "PlanTL-GOB-ES/cantemist-ner"
metrics:
- f1
model-index:
- name: PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- name: f1
type: f1
value: 0.8340
widget:
- text: "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo."
- text: "Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo pulmonar, con hallazgos altamente sospechosos de carcinoma."
- text: "Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre hígado cirrótico, en paciente con índice Child-Pugh B."
---
# Spanish RoBERTa-base biomedical model finetuned for the Named Entity Recognition (NER) task on the Cantemist dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)
</details>
## Model description
A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model that has been pre-trained on the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents, for a total of 1.1B tokens of clean and deduplicated processed text.
For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card.
## Intended uses and limitations
## How to use
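A minimal inference sketch with the `transformers` token-classification pipeline, reusing one of the widget examples as input (the aggregation strategy is an illustrative choice):
```python
from transformers import pipeline

# Load the fine-tuned NER model and group sub-word predictions into entity spans.
ner = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist",
    aggregation_strategy="simple",
)

text = (
    "El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de pulmón "
    "cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral) PD-L1 90%, "
    "EGFR negativo, ALK negativo y ROS-1 negativo."
)
print(ner(text))
```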
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The dataset used is [CANTEMIST](https://huggingface.co/datasets/PlanTL-GOB-ES/cantemist-ner), a NER dataset annotated with tumor morphology entities. For further information, check the [official website](https://temu.bsc.es/cantemist/).
## Evaluation
F1 Score: 0.8340
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use these models, please cite our work:
```bibtext
@inproceedings{carrino-etal-2022-pretrained,
title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
author = "Carrino, Casimiro Pio and
Llop, Joan and
P{\`a}mies, Marc and
Guti{\'e}rrez-Fandi{\~n}o, Asier and
Armengol-Estap{\'e}, Jordi and
Silveira-Ocampo, Joaqu{\'\i}n and
Valencia, Alfonso and
Gonzalez-Agirre, Aitor and
Villegas, Marta",
booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.bionlp-1.19",
doi = "10.18653/v1/2022.bionlp-1.19",
pages = "193--199",
abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
|
PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer
|
PlanTL-GOB-ES
| 2022-11-15T16:37:38Z | 593 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"es",
"dataset:PlanTL-GOB-ES/pharmaconer",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-04-06T13:43:19Z |
---
language:
- es
tags:
- biomedical
- clinical
- eHR
- spanish
license: apache-2.0
datasets:
- "PlanTL-GOB-ES/pharmaconer"
metrics:
- f1
model-index:
- name: PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer
results:
- task:
type: token-classification
dataset:
name: pharmaconer
type: PlanTL-GOB-ES/pharmaconer
metrics:
- name: f1
type: f1
value: 0.8913
widget:
- text: "Se realizó estudio analítico destacando incremento de niveles de PTH y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente), atribuidos al exceso de suplementación de vitamina D."
- text: " Por el hallazgo de múltiples fracturas por estrés, se procedió a estudio en nuestras consultas, realizándose análisis con función renal, calcio sérico y urinario, calcio iónico, magnesio y PTH, que fueron normales."
- text: "Se solicitó una analítica que incluía hemograma, bioquímica, anticuerpos antinucleares (ANA) y serologías, examen de orina, así como biopsia de la lesión. Los resultados fueron normales, con ANA, anti-Sm, anti-RNP, anti-SSA, anti-SSB, anti-Jo1 y anti-Scl70 negativos."
---
# Spanish RoBERTa-base biomedical model fine-tuned for the Named Entity Recognition (NER) task on the PharmaCoNER dataset
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
  - [Author](#author)
  - [Contact information](#contact-information)
  - [Copyright](#copyright)
  - [Licensing information](#licensing-information)
  - [Funding](#funding)
  - [Citing information](#citing-information)
  - [Disclaimer](#disclaimer)
</details>
## Model description
A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692)-base model pre-trained on the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents, totalling 1.1B tokens of clean and deduplicated text.
For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card.
## Intended uses and limitations
This model is intended for named entity recognition of pharmacological substances, compounds and proteins in Spanish clinical and biomedical text, following the PharmaCoNER annotation scheme on which it was fine-tuned.
## How to use
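The original card leaves this section empty. Below is a minimal usage sketch with the Hugging Face `transformers` `pipeline` API (assuming a recent `transformers` release); the example sentence is taken from the widget examples above, and the printed fields follow the standard token-classification pipeline output.
```python
from transformers import pipeline

# Load the fine-tuned NER model directly from the Hugging Face Hub
ner = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/bsc-bio-ehr-es-pharmaconer",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

text = (
    "Se realizó estudio analítico destacando incremento de niveles de PTH "
    "y vitamina D (103,7 pg/ml y 272 ng/ml, respectivamente)."
)

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```
With `aggregation_strategy="simple"`, each result corresponds to one detected entity span rather than to individual sub-word tokens.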
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The dataset used is [PharmaCoNER](https://huggingface.co/datasets/PlanTL-GOB-ES/pharmaconer), a NER dataset annotated with substances, compounds and proteins entities. For further information, check the [official website](https://temu.bsc.es/pharmaconer/).
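As an illustrative sketch (not the official fine-tuning script, which lives in the GitHub repository linked in the Evaluation section), the dataset can be pulled from the Hub with the `datasets` library; the `tokens`/`ner_tags` column names are an assumption based on the usual layout of Hub NER datasets.
```python
from datasets import load_dataset

# Load the PharmaCoNER dataset from the Hugging Face Hub
dataset = load_dataset("PlanTL-GOB-ES/pharmaconer")

print(dataset)  # available splits and their sizes

# Assumed column layout: "tokens" and "ner_tags" (typical for Hub NER datasets)
example = dataset["train"][0]
print(example["tokens"][:10], example["ner_tags"][:10])
```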
## Evaluation
F1 Score: 0.8913
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
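The linked repository contains the exact evaluation setup. As a rough sketch, an entity-level F1 of this kind is typically computed with `seqeval` over BIO-tagged sequences; the tag names below are illustrative placeholders, not necessarily the exact PharmaCoNER label set.
```python
from seqeval.metrics import classification_report, f1_score

# Toy BIO-tagged sequences standing in for real references and predictions
references = [["O", "B-PROTEINAS", "I-PROTEINAS", "O", "B-NORMALIZABLES"]]
predictions = [["O", "B-PROTEINAS", "I-PROTEINAS", "O", "O"]]

print(f1_score(references, predictions))            # micro-averaged entity-level F1
print(classification_report(references, predictions))
```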
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)
### Contact information
For further information, send an email to <plantl-gob-es@bsc.es>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use these models, please cite our work:
```bibtex
@inproceedings{carrino-etal-2022-pretrained,
title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
author = "Carrino, Casimiro Pio and
Llop, Joan and
P{\`a}mies, Marc and
Guti{\'e}rrez-Fandi{\~n}o, Asier and
Armengol-Estap{\'e}, Jordi and
Silveira-Ocampo, Joaqu{\'\i}n and
Valencia, Alfonso and
Gonzalez-Agirre, Aitor and
Villegas, Marta",
booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.bionlp-1.19",
doi = "10.18653/v1/2022.bionlp-1.19",
pages = "193--199",
abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for Digitalization and Artificial Intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
Los modelos publicados en este repositorio tienen una finalidad generalista y están a disposición de terceros. Estos modelos pueden tener sesgos y/u otro tipo de distorsiones indeseables.
Cuando terceros desplieguen o proporcionen sistemas y/o servicios a otras partes usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) o se conviertan en usuarios de los modelos, deben tener en cuenta que es su responsabilidad mitigar los riesgos derivados de su uso y, en todo caso, cumplir con la normativa aplicable, incluyendo la normativa en materia de uso de inteligencia artificial.
En ningún caso el propietario de los modelos (SEDIA – Secretaría de Estado de Digitalización e Inteligencia Artificial) ni el creador (BSC – Barcelona Supercomputing Center) serán responsables de los resultados derivados del uso que hagan terceros de estos modelos.
|